The validation of "AI" would imply that human comprehension and the lexical meaning of knowledge are GEOMETRICAL. This is an interesting view of it.
BTW, how large is the training corpus (terabytes? gigabytes?) versus the stored, "compressed" LLM weights of the running program? It would be interesting to compare the two sizes!
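For a rough sense of scale, here is a back-of-envelope sketch of that comparison. All numbers are illustrative assumptions, not measurements from any particular model: a corpus of roughly 15 trillion tokens (on the order of what recent frontier models reportedly train on) at an assumed ~4 bytes of raw text per token, versus a hypothetical 70-billion-parameter model stored at 2 bytes per parameter:

```python
# Back-of-envelope comparison of training-corpus size vs. stored weights.
# All figures below are assumptions for illustration only.

corpus_tokens = 15e12    # assumed corpus size in tokens
bytes_per_token = 4      # assumed average bytes of raw text per token
corpus_bytes = corpus_tokens * bytes_per_token

params = 70e9            # assumed parameter count
bytes_per_param = 2      # 16-bit weights
weights_bytes = params * bytes_per_param

TB = 1e12
print(f"corpus:  ~{corpus_bytes / TB:.0f} TB")
print(f"weights: ~{weights_bytes / TB:.2f} TB")
print(f"ratio:   ~{corpus_bytes / weights_bytes:.0f}x")
```

Under those assumptions the corpus comes out around 60 TB against roughly 0.14 TB of weights, a factor of a few hundred. Whether that counts as "compression" in any lexical sense is, of course, exactly the interesting question.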
A Short Note On Large Language Models
Love the Dewey-Goodman nexus. This is good stuff.