Welcome

Hey Altman, look at my GitHub. Simply removing vowels from an LLM's training data could cut training compute by 19.23 percent or more. Even without retraining, current LLMs can read vowel-less input without any problems, and with specific training the concept can be extended into 'ciphers' that strip even more of a document's data while preserving its inferable meaning. As an added bonus, these documents amplify the effects of traditional data compression quite a lot, possibly reducing overheads here and there.
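The basic idea can be sketched in a few lines of Python. The post doesn't specify a stripping rule, so this sketch assumes a simple one (drop every vowel that follows another letter, keeping each word's first character so words stay inferable); the `strip_vowels` helper and the zlib size comparison are illustrative only, not a reproduction of the 19.23 percent figure:

```python
import re
import zlib

def strip_vowels(text: str) -> str:
    # Hypothetical rule: remove vowels that follow another letter,
    # keeping word-initial characters ('open' -> 'opn', 'training' -> 'trnng').
    return re.sub(r"(?<=[A-Za-z])[aeiouAEIOU]", "", text)

sample = ("Simply removing vowels from training data could cut "
          "compute while the text stays readable to a language model.")
stripped = strip_vowels(sample)

print(stripped)
print(f"raw bytes:  {len(sample)} -> {len(stripped)}")
print(f"zlib bytes: {len(zlib.compress(sample.encode()))} -> "
      f"{len(zlib.compress(stripped.encode()))}")
```

Fewer characters generally means fewer tokens, which is where the hypothesized compute savings would come from; the zlib line lets you check how stripping interacts with conventional compression on your own text.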
