
ScotlandIS’ annual Tech Trends event returned with Richard Marshall laying bare the flaws of modern technology – from AI hype and messy data to unreliable systems – offering practical strategies for repair without throwing the baby out with the bath water.
Richard Marshall set a stark tone at this year’s ScotlandIS Tech Trends event.
Hosted at Burness Paull’s Glasgow office – the second stop of a three-night run that also takes in Edinburgh and Aberdeen – Marshall leaned into a gleefully nihilistic theme of “the future is broken”, urging a practical audit of modern technology’s flaws – in other words, “how do we fix it?”
Looking back at some of the biggest stories published on DIGIT over the last 12 months, it’s hard to argue with the severity of Marshall’s assessment.
Mass IT outages have affected global systems – from seemingly benign but cack-handedly rolled out updates to outright targeted attacks. AI, for all its unlimited potential for good (see applications in healthcare diagnostics, climate modelling and industrial optimisation), also offers extraordinarily frictionless pathways to base and abhorrent use-cases.
The digital divide remains a serious concern – especially as emerging technologies slowly subsume certain skill requirements and demand competencies that are becoming increasingly hard to acquire for those outside well-resourced environments.
All this and more is explored below as we dip into highlights from Richard Marshall’s Tech Trends talk, where he “looks at areas where things are broken and suggests some ways that we can fix them and throw in some fun stuff along the way.”
AI fragmentation and misplaced hype
Marshall’s opening lines set the tone: not an ode to model-capability races, but a reality check about how practitioners actually use the tools. He walked the audience through a typical, messy stack of assistants and said what many technologists quietly accept — different models are simply better for different jobs.
Marshall went on to ground these tool-level observations by taking a look at Anthropic’s recent Economic Index reports.
Citing these reports, Marshall says that “[Anthropic] cut their forecast productivity improvement from 1.8% down to 1% for GDP. And they’ve also put out these rather depressingly accurate summaries about the impact of AI on their productivity.”
He read the metrics straight to the room: “The overall conclusion is that AI plus humans is much more effective than AI on its own. So if you’ve got a really complicated process that takes 10 hours of work, they’ve got a very low success rate, 35% success rate for those. And even for the less complex things… it’s still only about 60% success rate on the AI support. So this is dramatically cut down from their numbers last year.”
Those figures shape his prescription: if the empirical reality is that “AI plus human” is the winning combination, then organisations must stop treating models as replacements and begin treating them as colleagues – narrowly scoped, supervised, and measured.
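Marshall’s “AI as colleague” framing – narrowly scoped, supervised, measured – can be sketched as a simple human-in-the-loop gate, where a model’s output is only accepted automatically above a confidence threshold and everything else is escalated for review. The threshold, field names and functions below are illustrative assumptions, not anything presented at the talk.

```python
# Hedged sketch of the "AI plus human" pattern: the model proposes,
# a human approves anything below a confidence threshold.
# The threshold and the prediction schema are invented for illustration.

CONFIDENCE_THRESHOLD = 0.85

def handle(prediction: dict, human_review):
    """prediction: {"label": ..., "confidence": float};
    human_review: callable taking the prediction and returning a label."""
    if prediction["confidence"] >= CONFIDENCE_THRESHOLD:
        return prediction["label"], "auto"
    # Escalate low-confidence cases instead of letting the model decide.
    return human_review(prediction), "reviewed"
```

The measurable part matters too: logging the `"auto"` versus `"reviewed"` split over time is what turns “supervised” from a slogan into a metric.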
Marshall also flagged the practical, operational hazards of early adopter projects. “One of the other statistics that came out in this last couple of weeks… was the fact that people have been addressing what’s called the AI hangover, which is if you started building your AI system two years ago, you’re going to have to completely throw them away because everything’s changed. It’s kind of a problem.”
He pushed beyond blunt scepticism to sketch out how teams should respond. Track emerging standards rather than making irreversible choices: “We need to start doing the same thing at every level. MCP, the Model Context Protocol, is a start in that direction.”
Beyond this, practitioners should focus on using AI for clearly defined, supervised tasks and avoiding the temptation to plaster “AI” across every business process.
Unreliable systems and tangled architectures
Marshall moves from trend-spotting to the more everyday grievances we all experience – bank login failures, crashed apps, and the quiet rot of production systems.
Marshall posits that the problem is not only that software fails, but that nobody understands the stack well enough to fix it, saying: “We have a lot of existing software and infrastructure sitting on top of existing data, and it was never designed to do any of this stuff… we actually have a real problem because this layer in the middle isn’t really very reliable.”
He drills into the API and architecture mess that creates fragility: “APIs haven’t been updated in years, so there’s a lot of missing functionality.” (While many APIs are actively maintained, large organisations often rely on legacy or internal APIs that remain in use for years with limited updates, contributing to system complexity.)
Marshall expands on his point, saying that there can be “15 different versions of an API set and widespread inconsistency. If you have a microservices architecture with 6,000 APIs – and they do exist – that is simply too complicated for most people to understand.”
Marshall highlights subtler failures too: error handling that simply “fails silently,” and “zombie systems” where source code is lost and nobody dares to switch off a rack of servers found behind a partition wall (an apparent true story for one unfortunate company).
Further, documentation, he says, is often worse than useless: “It’s probably more misleading to have wrong documentation – as is often the case – than no documentation.”
So what to do? Marshall suggests actively investing in an architectural vision that accepts change, improving testing and exception handling, and prioritising observability so failures are noticed, not whispered about until they become crises.
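One way to read the “noticed, not whispered” point in code is to contrast exception handling that swallows errors with handling that logs context and re-raises, so monitoring actually sees the failure. This is a minimal, hypothetical sketch – the `query_ledger` backend and account IDs are invented for illustration – not anything shown at the talk.

```python
import logging

logging.basicConfig(format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("ledger")

# Hypothetical backend call standing in for a legacy service.
def query_ledger(account_id):
    if account_id != "acc-1":
        raise KeyError(account_id)
    return 100

def fetch_balance_silent(account_id):
    # Anti-pattern: the silent failure Marshall warns about.
    try:
        return query_ledger(account_id)
    except Exception:
        return None  # caller cannot tell "no data" from "system down"

def fetch_balance_observable(account_id):
    # Failures are logged with context and re-raised,
    # so alerting and observability tooling actually see them.
    try:
        return query_ledger(account_id)
    except Exception:
        log.exception("ledger lookup failed for account %r", account_id)
        raise
```

The second version costs a few lines, but it is the difference between an incident report and a zombie system.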
Messy and risky data, amplified by AI
“Ninety percent of enterprise data is unstructured… 82% of files have at least one major inaccuracy on them… data is a big mess and we need to fix it because AI is a huge amplifier of problems. It is going to yell your errors out loud.”
Marshall gives one of the clearest metaphors of the evening: treat data like a supply chain. If logistics firms can model inventory and weather and pirates, he asks, why are organisations so lax about where their data comes from and how fresh it is? The consequence of sloppiness is amplified by AI: “If you don’t fix them first… it’s going to propagate through all of those other systems.”
Beyond this, we are operating in one of the most unpredictable geopolitical environments in recent memory. Trust is visibly eroding, and consideration for long-term consequences – beyond short-term gain or the placation of ego – appears increasingly scarce among certain world leaders. This can have severe consequences for businesses that rely on digital infrastructure and data hosted away from home.
The remedy he offers is practical: “discovering and classifying your data sources and managing them actively in the same way that people manage their supply chains… set up those standards… we need to know where it’s coming from as well.”
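Marshall’s supply-chain analogy could, as a minimal sketch, translate into a registry that records each data source’s owner, origin, classification and freshness SLA, with stale sources flagged the way a logistics team flags late shipments. All field names and values here are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative registry entry -- the schema is an assumption; the point
# is managing data sources actively, like suppliers in a supply chain.
@dataclass
class DataSource:
    name: str
    owner: str             # who is accountable, as with a supplier
    origin: str            # where the data comes from (system/region)
    classification: str    # e.g. "public", "internal", "personal"
    last_refreshed: datetime
    max_age: timedelta     # freshness SLA, supply-chain style

    def is_stale(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now - self.last_refreshed > self.max_age

def audit(sources):
    # Flag sources breaching their freshness SLA, like late shipments.
    return [s.name for s in sources if s.is_stale()]
```

The `origin` field also does double duty for the sovereignty question Marshall raises next: once every source records where it lives, “where is your company data?” becomes a query, not a guess.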
He presses the legal angle too: “We also have to talk about data sovereignty… it’s time to review it. Where is your company data? Is it local?”
Marshall suggests that businesses take a look at alternative suppliers they could fall back on. He mentions Tech Made in Europe. Billed as ‘Europe’s Tech Sovereignty Catalogue’, the site aggregates vetted vendors across cloud, connectivity, cybersecurity, data and AI, deliberately surfacing companies that develop and host software within Europe so organisations can make clearer sovereignty and compliance choices.
The platform has also introduced labels such as “Software Made in Europe” and “Software Hosted in Europe” to increase transparency about where software is developed and where data is stored.
Wholesale change isn’t required (or practical), but running small pilots with locally hosted or regionally certified suppliers – and recording those suppliers in a risk register alongside existing third-party dependencies – maps directly onto Marshall’s risk-management guidance.
It’s by no means a silver bullet, but it is a pragmatic way to hedge geopolitical or supply-chain shocks, reduce the legal ambiguity around data sovereignty and test how well alternatives integrate with your existing stack before you need them.
The crisis of fakes and rising attack complexity
“If you’re seeing fake news and you’re basing your business decisions on it, you could make a very bad decision. It creates cybersecurity risk. I think we all know about the AI-powered creation of viruses and phishing attacks and all these other amazing tools. But there are whole companies out there that are dedicated to creating viruses and attack vectors and they use AI to accelerate the process. AI-generated tools are very, very effective for it.”
Marshall walks the room through the new business threats that emerge when synthetic content scales.
It is not just a nuisance: fake briefs, fabricated precedent and AI-generated voice clones can cause legal, reputational and financial damage. Marshall cited one case in which a voice sampler was used to replicate a company’s CEO – convincingly enough that the fraudsters were able to have the company transfer money to them.
Marshall also describes the social engineering that has become industrialised: “pig butchering – a tactic in which the fraudsters (who lean on forced labour) spend weeks trying to build a relationship up with you… eventually they have a problem, they’ll start saying, Oh, I’ve just discovered this crypto thing, can you invest in this?” The techniques are adaptable: the same trust-building methods that trick retail victims can be turned on sysadmins to harvest credentials.
So how does Marshall propose firms respond? Well, for starters, detection and provenance matter: he pointed to provenance for media (digitally signed images… not to be confused with NFTs, mind). He illustrated the importance of sensible alerting with an anecdote: an innocuous customer call to ScrewFix Direct triggered a handset warning – “Potential scam!” – that proved useful even when it was a false alarm, so more of that, please.
On simple process fixes, Marshall urged small, practical barriers to fraud: “having a keyword that only the two people who are talking will know” – a tiny procedural tweak that raises the cost for spoofers.
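If a team wanted to encode that procedural tweak in software – say, verifying a callback request against a phrase agreed out-of-band – it reduces to a pre-shared-secret check. A minimal sketch, with an invented phrase and normalisation rules, using a constant-time comparison so the check leaks nothing through timing:

```python
import hmac

# Hypothetical pre-shared phrase, agreed out-of-band between the
# two parties -- Marshall's "keyword that only the two people know".
AGREED_KEYWORD = "heron-blue-42"

def caller_is_verified(spoken_keyword: str) -> bool:
    # Normalise casing/whitespace, then compare in constant time;
    # hmac.compare_digest avoids leaking information via timing.
    return hmac.compare_digest(spoken_keyword.strip().lower(),
                               AGREED_KEYWORD)
```

The cryptographic nicety matters less than the habit: any spoofer now needs a secret that never travelled over the channel they compromised.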
Above all, he insisted on verification and scepticism: “The AIs like to please you as a user… They’ll make up stuff to support what you wanted to do, but you have to check it through.” Education, he argued, remains a first-line defence – teach staff to spot synthetic content, make verification routine, and bake simple authentication steps into day-to-day processes.
Marshall’s prescription is not glamorous, but persistent human verification, as inconvenient and onerous as it can sometimes be, matters more than ever. As he puts it, if you don’t check it through, you risk very real legal, reputational and financial damage.
Skills shortages, human augmentation and tech inequality
Marshall closed by reminding the room that all the technology fixes rely on people. The twin problems are a lack of appropriate, applied training and an alarming hiring gap.
He says: “Companies are not hiring junior staff… management and accountants… think that they’re going to be able to save money by replacing juniors with AI. It’s not going to work. You really need experienced workers and knowledge.”
The solution is not to freeze at theory. Train people on the tasks they will actually perform – prompt engineering, prompt triage, model validation – and pair them with augmentation systems that teach on the job. “There’s a lot of very cool systems out there where… the AIs are detecting anomalies and they’re presenting them to people… it actually trains the cybersecurity staff in how to respond to incidents. So it’s not just replacing them, it’s actually teaching them how to do it.”
Though of course, this is a method that would need its own set of checks and balances from a human authority to ensure that what’s being taught has not just been dreamt up.
Marshall also warns of widening inequality: firms that can invest in talent and clean data will accelerate away from those that cannot, creating a vicious circle of disadvantage.
The policy and commercial answer is predictable but urgent: invest in training, keep hiring at junior levels, and recognise that long-term capability is built through people, not shortcuts. It’s worth dovetailing this point with Marshall’s earlier one about AI hangovers: any AI system you implement now, while doubtless useful, will likely need overhauling in a few years. Not so if you invest in humans.
Repair, not reinvention
While it would have been easy for Marshall to extol the virtues of one of the most fast-moving periods in recent tech history, he chose the more onerous path: refusing the easy headlines about transformative tech and instead offering a checklist of brittle foundations – fractured AI use, oversold XR, unreliable systems, dirty data, synthetic fakes and a skills shortage that will make repair hard.
His prescription for our technological maladies is practical: invest in architecture, treat data as a supply chain, test backups and disaster recovery plans, train staff in applied AI and response, and choose evolution over wholesale rewrites.
Basically, if we want to build cool new stuff, we have work to do.

