If I change the score in RAM before it's encrypted then none of this helps. That's the way the majority of cheaters seem to work atm anyway.
Additionally, it relies on security through obscurity, which is widely regarded as bad practice: you don't know where the weaknesses of the security system are until people trying to break in find them (among other reasons; read through http://en.wikipedia.org/wiki/Security_through_obscurity).
It would make more sense to use one of the current open encryption schemes, with a proper public/private key pair, rather than a home-grown routine. It still suffers from the RAM problem, but it avoids the additional risk of Dylan introducing his own security holes into the mix.
This does mean that someone can't simply decrypt and edit the score files; however, they could still create their own score file from scratch if they manage to pull the private key and file structure out of Audiosurf Air. (As a note, they could do the same with the hypothetical 'Dylan-specific' encryption routine as well.)
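To illustrate why a key embedded in the client doesn't help, here's a minimal sketch. It uses a symmetric HMAC tag as a stand-in for a real signature scheme (a shipped game would presumably use asymmetric signing, and everything here, the key, the file layout, the field names, is made up for illustration). The point is the same either way: whoever extracts the embedded key from the binary can produce files the verifier accepts.

```python
import hashlib
import hmac
import json

# Hypothetical key baked into the client binary -- exactly the thing a
# determined cheater can pull out of the executable.
EMBEDDED_KEY = b"key-shipped-inside-the-client"

def sign_score(score: dict) -> bytes:
    """Serialize a score and append an authentication tag."""
    payload = json.dumps(score, sort_keys=True).encode()
    tag = hmac.new(EMBEDDED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def verify_score(blob: bytes) -> bool:
    """Check that the tag matches the payload."""
    payload, _, tag = blob.rpartition(b"|")
    expected = hmac.new(EMBEDDED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

legit = sign_score({"track": "song.mp3", "score": 123456})
print(verify_score(legit))  # True

# A cheater who extracted the key forges a file the verifier
# cannot distinguish from a real one:
forged = sign_score({"track": "song.mp3", "score": 99999999})
print(verify_score(forged))  # True -- the scheme can't tell the difference
```

Tampering with a file *without* the key does fail verification, so the scheme stops casual hex-editing of score files, just not anyone willing to dig the key out.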
This again all goes back to whether it's worth including any of this. The amount of work required to build a solid anti-cheat system would probably double the workload for Audiosurf Air. Do we really want to wait that long when the benefits aren't that large? And it will still get broken anyway, at which point someone releases a cheat app and you're back to square one.
I would suggest the only reliable way to build an anti-cheat system would be to upload the entire track structure (block positions, not the music itself) along with the score, plus a replay of where the character moved, and then compare that track structure, with some form of fuzz, against the other tracks on the scoreboard to detect how close it is (which would be hard to implement without kicking valid scores). This would be hard, but not impossible, to fake, and would again require a lot of work from the dev for what is mostly a minor issue with a reasonable solution already in place (reporting, although we do need some more trusted individuals monitoring those reports).
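The fuzzy comparison idea could look something like the sketch below. The track "structure" is modelled as a list of (time, lane) block positions, which is purely my guess at a representation; the real format inside Audiosurf Air is unknown, and the fuzz window and threshold are arbitrary numbers that would need tuning to avoid kicking valid scores.

```python
def similarity(track_a, track_b, time_fuzz=0.05):
    """Fraction of blocks in track_a that have a counterpart in track_b
    in the same lane within time_fuzz seconds."""
    if not track_a:
        return 0.0
    matched = 0
    for t, lane in track_a:
        if any(l == lane and abs(t - u) <= time_fuzz for u, l in track_b):
            matched += 1
    return matched / len(track_a)

def looks_suspicious(uploaded, scoreboard_tracks, threshold=0.9):
    """Flag an upload whose structure matches no existing track for this
    song closely enough. The 0.9 threshold is a guess: too strict and
    valid scores get kicked, as noted above."""
    if not scoreboard_tracks:
        return False  # first score for this song: nothing to compare against
    return max(similarity(uploaded, t) for t in scoreboard_tracks) < threshold

real = [(0.50, 1), (1.02, 2), (1.51, 0)]
slightly_off = [(0.52, 1), (1.00, 2), (1.53, 0)]  # same track, tiny timing jitter
fake = [(0.10, 0), (0.20, 0), (0.30, 0)]          # fabricated trivial track

print(looks_suspicious(slightly_off, [real]))  # False: within the fuzz
print(looks_suspicious(fake, [real]))          # True: matches nothing on the board
```

Even this toy version shows the tuning problem: the same song analysed on different audio encodings could shift block timings past any fixed fuzz window, which is why it would take real effort to get right.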
Sooo, can we stop trying to fix the impossible now?