Age | Commit message | Author |
|
don't download it. keep it in lbmk.
libreboot moved to codeberg for git hosting,
and i didn't want to keep lugging around an
extra git repo just for one tiny project.
|
not just sloccount, but compiled binary size as
tested with tcc on an x86_64 host
|
i went too hard on the sloc reductions
a check inside a for loop could cause
incomplete reading of gbe images
revert that
|
also removed some unnecessary checks
fixed the check of pwrite's return value
(it should check for -1)
|
added a few that were more useful
deleted a few obnoxious ones
|
the byteswap() function is used for big endian host
compatibility, but it can also be used to swap words
in the stored mac address
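a minimal sketch of such a 16-bit word swap (the real byteswap() in nvmutil may differ in form):

    #include <stdint.h>

    /* swap the two bytes of a 16-bit nvm word; useful for big
       endian host compatibility, and for swapping the bytes of
       each word in the stored mac address */
    uint16_t byteswap(uint16_t w)
    {
        return (uint16_t) ((w << 8) | (w >> 8));
    }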
|
these were put in when i was testing the feature to
limit read/written bytes in loading/saving of files
|
the code is smaller
|
word/setWord no longer handles endianness. instead,
all bytes are swapped after reading and before writing the
file, and only if the host is big endian.
this improves performance on little endian hosts (most
machines), and the code is much simpler, so it's more
robust and less likely to break.
mac address endianness is made clearer in the code, including
a comment that explains it
(the nvm section contains little endian words, *except* the
mac address, whose words are stored big endian)
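a minimal sketch of the approach, with assumed names; endianness is detected at runtime, and the swap only runs on big endian hosts:

    #include <stdint.h>
    #include <stddef.h>

    /* on a little endian host, the first byte of the
       16-bit value 1 is 1 */
    static int bigEndian(void)
    {
        uint16_t one = 1;
        return *(uint8_t *) &one == 0;
    }

    /* swap every 16-bit word in the buffer, but only on big
       endian hosts; little endian hosts (most machines) pay
       nothing at all */
    void swapIfBigEndian(uint8_t *buf, size_t len)
    {
        if (!bigEndian())
            return;
        for (size_t i = 0; i + 1 < len; i += 2) {
            uint8_t t = buf[i];
            buf[i] = buf[i + 1];
            buf[i + 1] = t;
        }
    }

call this once after reading the file and once before writing it back, so the in-memory words are always in host order.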
|
it was resetting the total for each nibble. absolute
epic fail on my part.
fixed now.
|
reduce the number of calls to read() by using
bit shifts. when rnum is zero, read again. in
most cases, a nibble will not be zero, so this
will usually result in about 13-15 out of 16
nibbles being used, compared to the 8 nibbles
used before, which means that the number of
calls to read() is roughly halved. at the same
time, the extra logic is minimal (and probably
smaller) when compiled, outside of calls to
read(), because shifting is better optimised
(on 64-bit machines, the uint64_t is shifted
with just a single instruction, if the compiler
is decent), whereas the alternative, always
using exactly 16 nibbles by counting up to 16,
would involve an and mask and still need a
shift, plus...
you get the point. this is probably the most
efficient code ever written, for generating
random numbers between the value of 0 and 15
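a minimal sketch of the scheme, with assumed names (rnum as the 64-bit pool, rfd as an open /dev/urandom descriptor):

    #include <stdint.h>
    #include <unistd.h>
    #include <errno.h>
    #include <err.h>

    /* return a random value between 0 and 15. the 64-bit pool is
       refilled from /dev/urandom only when it runs out; leading
       zero nibbles are lost on refill, so a full pool typically
       yields 13-15 usable nibbles instead of a fixed 8 */
    uint8_t rhex(int rfd)
    {
        static uint64_t rnum = 0;
        uint8_t n;
        if (rnum == 0 && read(rfd, &rnum, sizeof(rnum)) == -1)
            err(errno, "/dev/urandom");
        n = (uint8_t) (rnum & 0xf);
        rnum >>= 4; /* one instruction on 64-bit machines */
        return n;
    }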
|
the way nvmutil is designed, setWord() is only ever called
under non-error conditions. however, if one part is valid but
the other one isn't, and a command is run that touches both parts,
errno is non-zero when writeGbeFile is called.
in situations where one part is valid but the other isn't, AND the
writes to gbe (in memory) result in no change, writeGbeFile is
not called; in that situation, errno was not being reset, despite
the non-error condition.
this patch fixes the bug, resulting in zero exit status under
such conditions
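a minimal sketch of the fix, using hypothetical names; the point is resetting errno when writeGbeFile is skipped:

    #include <errno.h>
    #include <stdlib.h>

    extern int gbeChanged;                  /* hypothetical flag */
    extern void writeGbeFile(const char *); /* sets errno on failure */

    void finish(const char *path)
    {
        if (gbeChanged)
            writeGbeFile(path);
        else
            errno = 0; /* a stale errno from probing the invalid
                          part must not leak into the exit status */
        exit(errno);
    }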
|
the current code writes part 1 first, and part 0 next,
on the disk, due to the way the swap works.
with this change, swap still swaps the two parts of the file,
on disk, but writes the new file sequentially.
this change might speed up i/o on the file system, on HDDs.
on SSDs, this change likely makes no difference at all.
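a minimal sketch of sequential writes, with assumed names (fd open on the gbe file, gbe/gbe2 pointing at the parts as they should appear on disk):

    #include <stdint.h>
    #include <unistd.h>
    #include <errno.h>
    #include <err.h>

    #define SIZE_4KB 0x1000

    void writeParts(int fd, const uint8_t *gbe, const uint8_t *gbe2)
    {
        /* part 0 at offset 0, then part 1 at offset 4KB:
           the file is written in disk order */
        if (pwrite(fd, gbe, SIZE_4KB, 0) == -1)
            err(errno, "part 0");
        if (pwrite(fd, gbe2, SIZE_4KB, SIZE_4KB) == -1)
            err(errno, "part 1");
    }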
|
don't use malloc(). instead, just load random bytes
into a uint64_t
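a minimal sketch, with assumed names:

    #include <stdint.h>
    #include <unistd.h>

    /* no malloc(): the random bytes land directly in a
       caller-provided uint64_t on the stack */
    int readRand(int fd, uint64_t *rnum)
    {
        return read(fd, rnum, sizeof(*rnum))
            == (ssize_t) sizeof(*rnum) ? 0 : -1;
    }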
|
this will make the code more flexible, if (when) i
add changes that allow multiple commands to be used
in a single run, on any given number of files
|
only read the required number of bytes, per command
|
On many Lenovo GbE regions (in factory firmware), part 0 is
invalid but part 1 is valid.
This change means part 1 is checked first. If part 1 is valid,
part 0 won't be checked at all (thanks to short-circuit
evaluation in C).
Most people just extract the factory GbE file, modify it and
re-insert it into the ROM image, so this causes a nice speedup.
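a minimal sketch of the ordering, assuming a validChecksum(part) helper:

    #include <err.h>

    extern int validChecksum(int part); /* assumed helper */

    void checkParts(void)
    {
        /* part 1 is usually the valid one on factory Lenovo
           firmware, so test it first; || short-circuits, so
           part 0 is never checked when part 1 is valid */
        if (validChecksum(1) || validChecksum(0))
            return;
        errx(1, "no valid gbe part");
    }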
|
don't constantly open/close /dev/urandom;
only read 12 bytes at a time.
because of this change, the readFromFile() function now only
handles gbe files
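a minimal sketch of keeping the descriptor open for the whole run:

    #include <fcntl.h>

    /* open /dev/urandom once, on first use, and reuse the
       descriptor instead of reopening the file every time */
    int urandomFd(void)
    {
        static int fd = -1;
        if (fd == -1)
            fd = open("/dev/urandom", O_RDONLY);
        return fd;
    }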
|
Massive reduction in number of bytes written, if copy/swap
commands are not used.
|
Old behaviour: always write both gbe sections.
New behaviour: only write back what was changed.
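a minimal sketch of the new behaviour, with assumed names; a per-part flag records whether setWord() actually changed anything:

    #include <stdint.h>
    #include <unistd.h>
    #include <errno.h>
    #include <err.h>

    #define SIZE_4KB 0x1000

    extern int partModified[2]; /* assumed: set by setWord() */
    extern uint8_t *part[2];

    void writeChangedParts(int fd)
    {
        for (int p = 0; p < 2; p++) {
            if (!partModified[p])
                continue; /* old behaviour: both always written */
            if (pwrite(fd, part[p], SIZE_4KB,
                (off_t) p * SIZE_4KB) == -1)
                err(errno, "part %d", p);
        }
    }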
|
i didn't like the previous commits, they felt really hacky
running malloc and then changing the pointer directly just rubs
me the wrong way
fix that
|
don't do xor swap. we know gbe2 is always 4KB higher than
gbe in memory, so we can just set gbe2 to the value of gbe,
and OR the size in bytes of 4KB into gbe2
this is only a marginal speed boost, negligible even, but it's
done for the lulz
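a minimal sketch of the OR trick; it assumes the buffer holding both parts is at least 8KB-aligned, so the 4KB bit of the address is clear and OR behaves like +4KB:

    #include <stdint.h>

    #define SIZE_4KB 0x1000

    /* assumed: buf is 8KB-aligned; part 0 at buf, part 1 4KB above */
    void setParts(uint8_t *buf, uint8_t **gbe, uint8_t **gbe2)
    {
        *gbe = buf;
        *gbe2 = (uint8_t *) ((uintptr_t) buf | SIZE_4KB);
    }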
|
similar in concept to the last change. we now write
individual 4KB blocks for parts 0 and 1, at the end
of nvmutil, based on the pointer values gbe and gbe2.
instead of running memcpy, simply overwrite the pointer.
this results in less I/O, thus more speed
|
instead of XOR-swapping every byte, have pointers to the
two parts and *XOR swap the pointers*. at the end of the
program execution, when writing, pwrite the two parts into
the same file
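a minimal sketch of swapping the pointers instead of the bytes:

    #include <stdint.h>

    /* swap which buffer counts as part 0 and part 1 by
       XOR-swapping the pointers themselves, rather than
       XOR-swapping all 4096 bytes of each part */
    void swapParts(uint8_t **gbe, uint8_t **gbe2)
    {
        uintptr_t a = (uintptr_t) *gbe, b = (uintptr_t) *gbe2;
        a ^= b;
        b ^= a;
        a ^= b;
        *gbe = (uint8_t *) a;
        *gbe2 = (uint8_t *) b;
    }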
|
Without this change, arbitrary MAC addresses will always be masked.
This change restores the intended behaviour.
|