MarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
When natural language feels unnatural

Last updated: November 16, 2025 11:15 pm
Published: 3 months ago

Natural language processing has been part of computing for decades. The research lineage goes back to ELIZA, SHRDLU, and statistical machine translation labs long before deep learning. But the moment natural language became a _consumer_ interface — the moment it felt like something the general public could hold in their hand — was the release of ChatGPT in late 2022. I mark that moment the way I mark the early 90s as the birth of the consumer internet: not when the technology became possible, but when ordinary people suddenly had access to it.

Linus Lee once said, “In the aftermath of new foundational technology emerging, there is often a period of a few years where the winners in the market get to decide what the interface paradigm for the technology is.”

We are still in that period. The early dominant form factor — chat — was not inevitable. It was simply the first workable abstraction, the one that could showcase the capability without forcing people to learn a new instrument. But the longer I work with AI, the more I’m convinced we’re only in the first generation of interaction design. “Chatbots won’t be the future” is too blunt, but it’s pointing at something important: natural language is powerful, but it’s also incomplete as an interface layer.

Before we try to replace chat, we should understand where it excels, where it breaks down, and where designers might look for the next set of interactions.

Chat won because it was already familiar. It didn’t require new conceptual scaffolding. A blank text box and a blinking cursor — people already knew what to do with that. For the companies shipping models, chat had another advantage: it allowed the system to be general. No dedicated editors, no domain-specific tools, no new affordances to teach. Just “type what you want.”

It was also convenient for vendors. A chat interface collapses every capability — search, summarization, tutoring, writing, analysis, code generation, planning — into the same input field. The product surface area is tiny. The model’s surface area is enormous. The mismatch is part of the magic and part of the frustration.

Amelia Wattenberger’s essay captured this well: chat interfaces hide capability. They give no hints about what’s possible, no guidance about scope or boundaries, and no reliable signals about what the model is good at vs. merely willing to attempt. It is the opposite of discoverable design.

But before we throw chat away entirely, it’s worth being clear about the places where natural language really does shine.

The first is exploratory knowledge work. When you’re moving between questions, references, half-formed ideas, and loosely defined research threads, natural language allows you to keep thinking fluidly. Instead of translating intent into a rigid syntax, you can stay in the conceptual layer. Most of my own day-to-day work falls into this category: asking about prior art, turning meeting notes into a plan of attack, or having an AI assistant evaluate an idea from multiple angles. Knowledge work is naturally discursive. Language fits.

Code is the surprising one, but the research backs it. The “naturalness” of software — the statistical predictability and repetitive structure of code — makes it easy for models to autocomplete, transform, and refactor. Tools like Cursor have pushed this further: pairing a conversational layer with a structured code editor gives you a hybrid surface where language can guide the model while the environment keeps things grounded. It is not quite chat; it is not quite an IDE; it is something in between.
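
The “naturalness of software” claim is easy to demonstrate in miniature. In the toy sketch below (the corpus and whitespace tokenization are invented for illustration), even a crude bigram model predicts the next code token reasonably well, simply because code is so statistically repetitive:

```python
from collections import Counter, defaultdict

# A tiny "corpus" of code-like lines. Real studies use millions of
# lines, but the repetitiveness is visible even at this scale.
corpus = [
    "for i in range ( n ) :",
    "for j in range ( m ) :",
    "for k in range ( len ( items ) ) :",
]

# Count, for each token, which tokens follow it.
follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1

def predict(token):
    """Return the most frequently observed next token, or None."""
    nxt = follows.get(token)
    return nxt.most_common(1)[0][0] if nxt else None

print(predict("range"))  # "(" — follows "range" every time
print(predict("for"))    # some loop variable
```

Nothing about this sketch is specific to neural models; it just makes visible the statistical regularity that large models exploit at scale.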

In both domains — knowledge work and code — the key is the same: the medium can tolerate ambiguity because the user can quickly verify or course-correct. The stakes of misinterpretation are lower.

Language is flexible, but flexible does not mean precise. Most people who use LLMs regularly have run into the same frustration: you ask for one small, specific adjustment, and what comes back is subtly different from what you meant.

The problem isn’t that the model is incapable of understanding the request. The problem is that the interface is too ambiguous for the level of precision the user expects. Language carries nuance, but it also carries room for misinterpretation. The more specific the adjustment, the more unnatural it feels to communicate it through free-form text.

This is the same tension that existed in the early days of computing. The command line is extremely expressive, but it exposes the user directly to the consequences of precision. The GUI emerged because the CLI depended on memory: users had to recall the exact commands, flags, and directory paths for everything they wanted to do. The GUI turned those invisible structures into visible controls. It shifted computing from recall to recognition. We are living through the same transitional moment.

We don’t need to reinvent every pattern just because the underlying system is new. A button is still a button. A toggle still communicates a binary state. A slider still expresses a range. These controls were developed through decades of trial-and-error, painful usability research, and observed human behavior.

But AI complicates these controls because the outputs are no longer deterministic. A button labeled “Summarize” might now behave slightly differently depending on context, model state, or the user’s last few sentences. The control remains simple; the behavior becomes more variable.
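
That tension can be sketched in a few lines. Here `fake_llm` is an invented stand-in for a sampled model call, not a real API; the point is that the button’s contract looks like a pure function while the behavior behind it varies between identical clicks:

```python
import random

def fake_llm(prompt, temperature=0.7):
    """Invented stand-in for a sampled model call: same input,
    potentially different output on every invocation."""
    styles = ["One-line gist", "Three bullet points", "Short paragraph"]
    return random.choice(styles) if temperature > 0 else styles[0]

def summarize_button(document):
    # The control's contract reads like a deterministic function...
    return fake_llm(f"Summarize: {document}")

# ...but two identical clicks may not produce identical results.
first = summarize_button("quarterly report")
second = summarize_button("quarterly report")
print(first, second, sep=" | ")
```

Setting `temperature=0` in the stand-in recovers the old deterministic contract, which is roughly the trade-off real products make when they pin a button to a fixed, low-temperature prompt.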

This is why many people are stuck in the “skeuomorphic” phase of AI. They are learning that behind every button is not a function but an agent. You see this in the way non-expert users approach chat: they poke at it, not quite sure what’s safe to ask, not sure what the system can or cannot do. The interface is friendly, but the mental model is missing. As a result, the user is constantly over- or under-specifying their request.

Long before AI, UI controls existed to constrain and guide. They still matter. The question is how we adapt them when the underlying system is no longer deterministic.

Every new foundational technology goes through a phase where its interactions imitate whatever came before it. Steve Jobs talked about this in his 1983 Aspen design conference talk, noting how the first television programs looked like “radio shows with a camera pointed at them.” Early web pages looked like printed magazines. Early mobile apps were just shrunken desktop windows. Early touch interfaces carried over faux-3D buttons from physical controls. It’s a normal part of a medium figuring out its native grammar.

Designing new interactions for AI requires a mindset closer to Xerox PARC than to web design circa 2010. Mixed-mode inputs, in-context editing, multimodal hints, ephemeral controls, dynamic UI generation, “follow-up” tools that exist only when needed, and agentic workflows that live next to your working materials instead of inside a chat window — all of these are fragments; nothing yet feels definitive. Still, someone has to keep pushing the exploration forward.
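
One of those fragments — “follow-up tools that exist only when needed” — can be sketched concretely. The heuristics below are invented for illustration; the idea is that the system proposes a few context-dependent actions after each result, and the UI renders them as transient controls rather than a fixed toolbar:

```python
def propose_followups(last_output: str) -> list[str]:
    """Return ephemeral actions relevant to what was just produced.
    The trigger rules here are toy heuristics; a real system might
    ask the model itself to propose them."""
    actions = []
    if len(last_output.split()) > 50:
        actions.append("Shorten")
    if any(ch.isdigit() for ch in last_output):
        actions.append("Chart these numbers")
    if "```" in last_output:
        actions.append("Run this code")
    return actions or ["Ask a follow-up"]

# A UI layer would render each string as a one-tap button and
# discard the set once the context moves on.
print(propose_followups("Revenue grew 12% to $4.1M in Q3."))
```

The interesting design property is that the controls are derived from context rather than declared up front, which is exactly what a fixed, deterministic toolbar cannot do.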

At one point, the command line interface (CLI) looked like the final form of computing. It was powerful, expressive, and — if you were initiated — fast. People genuinely believed it was the interface of the future. Then, in December 1968, Douglas Engelbart walked onto a stage in San Francisco and gave The Mother of All Demos.

Engelbart didn’t reject the CLI. He showed that computing could operate at a different altitude. The demo stitched together interactions no one had ever seen assembled in one place: the mouse, hypertext, windowed views of the same document, real-time collaborative editing, and live video conferencing with a remote collaborator.

None of these invalidated the CLI. Developers still rely on it today. What Engelbart proved was that computing needed a second grammar — one built around direct manipulation, spatial reasoning, and shared context. The GUI wasn’t inevitable until he made it visible.

AI is in an equivalent phase. Chat is our modern CLI: maximally expressive, but dependent on people knowing what to say and how to say it. Prompting is powerful in the way typed commands were powerful, but it asks too much of too many users. Too much of the system remains invisible. Most people don’t want to learn the dialect of a model any more than they want to memorize command syntax.

What we don’t have yet is our Engelbart moment for AI — an integrated demonstration where intelligence, context, and interaction share the same space. Something that shows how multimodal input, contextual affordances, dynamic views, and shared human-machine environments can operate together instead of in isolation.

Right now, we’re working with fragments: chat for exploration, buttons for determinism, agents for workflow, early multimodal experiments, AI-native editors that hint at new patterns. None of it has settled into a single grammar. That’s why this era feels unfinished.

The next breakthrough won’t be a better prompt. It will be a new interaction language — something as self-evident to future designers as direct manipulation became after 1968. Someone is going to give us the next Mother of All Demos for AI. The pieces are on the table; they just haven’t been assembled yet.

We’re in the “command line” era of AI. Powerful, but incomplete. Expressive, but blunt. The GUI moment hasn’t arrived yet.

Interfaces are not a choice between chat and not-chat. They’re a spectrum of abstractions that must be tuned to the work at hand. Natural language is powerful in some contexts and deeply insufficient in others. Deterministic controls offer clarity, but they can’t express the full range of model behavior. The right answer is rarely the extreme. The right answer is the right tool for the job.

Somewhere in this mix — language, tools, affordances, agents, multimodality — there is a breakthrough waiting. A moment on the level of pull-to-refresh or the Mother of All Demos, where the interface teaches the world a new grammar for interacting with intelligence.

We haven’t seen it yet. But someone is going to show it to us.

Read more on proofofconcept.pub

This news is powered by proofofconcept.pub.
