LLMs just don't understand

I agree with Grady's tweet:

But I also understand (or at least I think I do) that it's difficult to describe the subtleties of LLMs and their capabilities at language manipulation and automation without resorting to anthropomorphism at times.

So, I decided to manipulate some tokens using my word calculator to come up with some alternative terms:

| Concept | LLM Capability | Alternate Terminology |
| --- | --- | --- |
| Hallucinations | Produces outputs that don't align with established reality, creating novel or unusual responses. | Novelty Creation |
| Reasoning | Provides outputs that follow a logical structure based on the context of the input. | Logic Simulation |
| Understanding | Generates contextually relevant outputs by processing the semantic structure of the input. | Contextual Response |
| Learning | Produces new responses by applying patterns acquired during training. | Pattern Use |
| Memory | Uses information embedded in training data to generate responses that seem contextually appropriate. | Data Recall |
| Judgement | Produces outputs that appear to evaluate or weigh different factors or options. | Evaluation Simulation |
| Decision-making | Selects the most fitting output from a range of possibilities based on the input. | Optimal Selection |
| Reading | Processes and understands the structure and semantics of written language. | Text Interpretation |
| Writing | Produces written content that follows grammatical rules and matches the context of the input. | Text Production |
| Imagination | Generates narratives or scenarios that extend beyond the given input. | Scenario Creation |
| Thinking | Generates outputs that seem to display a thought process, achieved by pattern matching based on training data. | Simulated Thought Process |
| Translation | Converts a piece of text from one language to another, maintaining the meaning and context. | Language Conversion |
| Listening | In the context of voice recognition software, transcribes spoken language into written text. | Audio Transcription |
| Speaking | In the context of text-to-speech software, converts written text into spoken language. | Speech Synthesis |
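If you'd rather apply the table mechanically, here's a minimal sketch of it as a Python lookup, so you can swap anthropomorphic terms for the alternates in your own writing (the dict and the `de_anthropomorphize` helper are my own illustrative names, not anything standard):

```python
# Mapping from the anthropomorphic term to the alternate terminology above.
ALTERNATE_TERMS = {
    "hallucinations": "novelty creation",
    "reasoning": "logic simulation",
    "understanding": "contextual response",
    "learning": "pattern use",
    "memory": "data recall",
    "judgement": "evaluation simulation",
    "decision-making": "optimal selection",
    "reading": "text interpretation",
    "writing": "text production",
    "imagination": "scenario creation",
    "thinking": "simulated thought process",
    "translation": "language conversion",
    "listening": "audio transcription",
    "speaking": "speech synthesis",
}

def de_anthropomorphize(term: str) -> str:
    """Return the alternate terminology for a term, or the term unchanged."""
    return ALTERNATE_TERMS.get(term.lower(), term)
```

For example, `de_anthropomorphize("Hallucinations")` returns `"novelty creation"`.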

I honestly don't like them because they are clunky and hard to remember.

But I now have a blog article to point to when I see smart people arguing over definitions.
