MarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
I thought AI and LLMs were dumb and useless until I self-hosted one from home

Last updated: August 14, 2025 4:00 pm

I’ll be the first to admit I never liked the first wave of artificial intelligence and large language models (LLMs). There’s still an argument over how to properly categorize these smart assistants. Are they AI, or are they simple scripts with a vast knowledge pool to draw on when constructing replies? Where is the line in the sand between a simple tool and artificial intelligence? Regardless of where you sit on the fence, LLMs are everywhere now. You’ve seen it with Copilot and the whole AI PC marketing buzz, and more people are self-hosting them from home than ever before.

Up until now, you could place me firmly within the camp of those not using LLMs (or much in the way of “AI” as people seem to refer to it these days) and certainly not one to self-host them from my home lab. Well, that all changed after I finally got around to toying with Ollama and Open WebUI.

My disdain for AI (and LLMs)

I just didn’t see the point

The average person doesn’t need an LLM to complete tasks. Sure, you can have your email app summarize an email for you, but why not just read it? The same goes for Copilot on Microsoft Windows. As soon as these AI-focused features were announced for macOS, iOS, Android, and Windows, I started wondering what people would actually use them for. As it turns out, not much. The Copilot launch was disastrous, and everyone poked fun at Apple for its Apple Intelligence marketing on the iPhone. It wasn’t that I didn’t believe these features had a place; it just wasn’t the time.

Is Microsoft Edge my AI-powered browser? I think not.

Like many new waves within the tech sector (hello, NFTs!), the sudden rollout of AI across hardware and software took many by surprise. Everyone wanted to shove even more AI into their branding than ever before. We’ve got AI in Windows, AMD Ryzen AI CPUs, the famous ChatGPT, and more. Even Heinz used AI in its marketing … for tomato ketchup. I quickly grew tired of the fad, as I’m sure many others did. It was becoming the next big thing that no one really understood, but that companies wanted to push onto unsuspecting consumers because profit matters above all else.

It always felt like AI was being pushed as a solution to a problem we had yet to discover. If you’re using iOS or Windows right now, how are you making use of the available AI features? Has your life drastically improved since making the switch and dabbling in the world of smart assistants? Does Spotify need to show which features make use of smart tools? Probably not, and it’s something we’ll likely see fade into the dustbin of history. The same goes for Google, which now places AI at the forefront, next to the same search results page that made it the conglomerate it is today.

Don’t get me wrong, using ChatGPT can be a positive. Need to know something quickly without hunting down a specific site through search engines? ChatGPT can provide a response within seconds. But how do we take it beyond simple AI image generation and answering queries? That’s what I struggled to understand with regard to the average person, who would sooner finally clean out their dusty PC than make full use of artificial intelligence. Is this big push across the tech sector simply a way to turn us all into floating husks from WALL-E?

Then there’s the whole issue of ethics, reliability, bias, security, and the economic impact of using LLMs. Running these models and similar technologies also requires a lot of power, something the press has covered a few times. It’s a confusing period, because generative AI, which is primarily what we interact with today, didn’t simply appear overnight. It has been a lengthy process to get to where we are, even though the marketing machine has taken the term and run with it for absolutely everything.

Looking to purchase an all-in-one (AIO) PC? That’ll now be an AIO AI PC. Sounds great, right?! The problem we all have as consumers (and as tech media, until we do some digging) is that it’s never clear what exactly is AI-powered. Take a service you’ve been using for years. A recent update and rebranding now place AI at the forefront, plastering its tools everywhere, but are they really different from the existing functionality? Is your favorite video editor so much more versatile that it has a measurable impact on your workflow? Do we really need summaries for emails?

Self-hosting LLMs changed everything

That’s when it all clicked

I don’t have anything AI-enabled, be it my PC, phone, or fridge. Everything has its place, but I am fleshing out the home lab, self-hosting as much as possible, and slowly making things smarter throughout the home. A central piece of this puzzle is Home Assistant. This handy platform brings everything together in a tightly knit package with a plethora of integrations, community plugins, and more. It can be used to improve various aspects of your daily routine, from automatically turning on lights and sockets to managing your entire security system.

That’s when I got the idea from my esteemed colleague Adam Conway to self-host an LLM specifically for Home Assistant and Frigate. Turning to Proxmox, I put together an old system consisting of an Intel Core i7-10700K, an Nvidia GeForce RTX 3060, and 32GB of DDR4-3200 RAM. Nothing too powerful, but more than enough to handle a 14B LLM without much delay on each request. Once Ollama and Open WebUI were up and running, it all clicked, and I finally saw the power (and versatility) an LLM offers, even if it’s not “true AI.”
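Once a setup like this is running, other machines on the network can talk to it over Ollama's REST API, which listens on port 11434 by default. Here's a minimal sketch of checking what the server has available; the host address is a hypothetical example for your own network:

```python
import json
import urllib.request

# Hypothetical address of the Proxmox VM running Ollama --
# adjust for your own network. Ollama listens on 11434 by default.
OLLAMA_HOST = "http://192.168.1.50:11434"


def tags_url(host: str) -> str:
    """Build the URL of Ollama's /api/tags endpoint, which lists pulled models."""
    return host.rstrip("/") + "/api/tags"


def list_models(host: str = OLLAMA_HOST) -> list:
    """Return the names of every model the server has pulled."""
    with urllib.request.urlopen(tags_url(host)) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


# Usage (with a live Ollama server): print(list_models())
```

Anything that can make an HTTP request, from a Home Assistant automation to a quick script, can query the same endpoint, which is what makes a single central LLM box so useful.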


With a model running on this newly formed server, I could link it up with Home Assistant Voice, taking the load off the HA Proxmox node and taking advantage of the available GPU for faster processing and considerably better results. The same goes for Frigate. With five 1080p security cameras, the local machine was happily chugging away with an Intel Xeon E3-1245 v2 CPU and an Nvidia T1000 GPU, handling all the detection and recording. But this, too, could be offloaded to the LLM for some really cool results, such as searching through text detected on vans, shirts, license plates, and other captured imagery.
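The image-search idea boils down to sending a camera snapshot to a multimodal model. As a rough sketch of that, assuming a vision-capable model such as "llava" has been pulled (the model name, server address, and prompt here are all assumptions, not Frigate's actual integration):

```python
import base64
import json
import urllib.request

# Hypothetical address of the LLM server; "llava" below is an assumed
# multimodal model name -- pull whichever vision model you prefer.
OLLAMA_URL = "http://192.168.1.50:11434/api/generate"


def build_vision_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build the /api/generate body with a base64-encoded image attached."""
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }


def describe_snapshot(path: str, model: str = "llava") -> str:
    """Ask a multimodal model to describe a saved camera snapshot."""
    with open(path, "rb") as f:
        body = build_vision_request(
            model, "Describe any text visible on vehicles or clothing.", f.read()
        )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (with a live server): print(describe_snapshot("driveway.jpg"))
```

Running descriptions like these against saved detections is what turns a pile of camera footage into something you can actually search.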

I now have a server that can run models for image processing, handle home automation, and field individual requests. That could be anything from checking over code to helping with ideation or solving a problem where I missed something relatively obvious. I now understand just how capable these advanced models are and what this means for our future as a species. Whether we can harness this power in responsible and useful ways remains to be seen, but I just wish all of this were marketed better.

Why self-hosting AI matters

Keep tabs on everything

I self-host Immich because I don’t want some company managing my personal media. I use Jellyfin because I prefer to purchase movies and music and enjoy them whenever I want without paying a monthly subscription. I prefer Nextcloud over Microsoft 365 because it’s free and powerful enough for my needs. I prefer open-source and freely available software over what’s offered by tech giants simply because I get tighter control over my data and how it’s used. Donating to these projects is far more satisfying than paying a monthly subscription for each individual service.

The same goes for Ollama, Open WebUI, and the plethora of LLMs available. These are fantastic when used properly and in conjunction with other systems. Firing up ChatGPT in your browser is convenient, but you’re using a remotely hosted LLM on a website you do not control. Where does all that data go? I can now see the appeal of running LLMs from home: you have absolute control over how they’re used and don’t have to worry about privacy. You can enjoy all the performance that comes with running such models, so long as your hardware (and power budget) is up to the task.

You can install and run your own LLM on a desktop or laptop PC.

If you haven’t already done so, give Ollama a try. You can run it on the same PC you’re using to read this article. It’s easy to install and lets you see how an LLM can improve your life through quick answers and connections to other services. Want to take things to the next level? Build a small home server from used PC parts and install Proxmox as the foundation for your new AI hub. Ollama and Open WebUI can be installed through a community script, making this accessible even to the less tech-savvy.
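For a first taste, once Ollama is installed and a model has been pulled, a single prompt is one HTTP call away. A minimal sketch, assuming a default local install and a hypothetical model name ("llama3.2" here stands in for whatever you pulled):

```python
import json
import urllib.request


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask(prompt: str, model: str = "llama3.2",
        host: str = "http://localhost:11434") -> str:
    """Send one prompt to a local Ollama instance and return the full reply."""
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (after pulling a model locally): print(ask("What is a home lab?"))
```

Everything stays on your machine; no account, no subscription, and no data leaving the house.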

Better still, you can provide the LLM with everything you have available and see if it can help you work out ways to leverage the underlying technology to create the ultimate smart home. That, or you can simply ask how best to make Eton mess on a lovely summer’s day. I’ve had a blast hosting, interacting with, and finding more ways to use LLMs at home, which I never thought would be the case.

This news is powered by XDA-Developers.

© Market Alert News. All Rights Reserved.