I am currently trying to implement an algorithm to select a unique (16-bit) identifier. The challenge is to do this in a fast way that doesn't use too much memory. The list of currently used identifiers is determined by scanning an external Flash device via a sequence of SPI transactions, which makes it a relatively slow process. Also, the algorithm will be running on a small-ish microcontroller, so I can't really just read all the entries into RAM and process them there.
The thoughts I've had so far are:
- Pick a number, then scan through the list and see if it's used. Rinse and repeat. Suffers from being rather slow (particularly if there are a lot of files).
- As above, but pick the number using a pseudo-random number generator with an appropriate seed. This has the advantage that fewer iterations are likely to be needed before a free value turns up.
- Scan through the list and populate an array with all the entries found. Sort it and then it becomes trivial. This could use an enormous amount of memory.
- Use an enormous (okay, ridiculously enormous) bit mask. Not really practical.
- Accept that the lifetime of the tool is such that it will be thrown away or 'formatted' long before it has written 65534 files to the Flash, so just store the highest value used so far in the Flash or Backup memory and keep incrementing. In all honesty, this would probably work quite well for this specific application.
At the moment, I'm leaning towards either the second or the fifth option, but I'd be interested to know if anyone has any other thoughts. I'd like to think there's an algorithm similar in form to a CRC that could process each used number in turn and, at the end of the scan, give a fair idea of a number that hasn't been used, but I've no idea how such a thing might work.