Old Stories Revisited: Cybersecurity with New AI Twists

Last updated: March 5, 2026 4:05 am

We organize stories from this Survey year as follows: Section II describes disruptive cyber incidents at major retailers; Section III examines a cryptocurrency heist exploiting electronic transfer protocols; and Section IV reviews the persistent misuse of unverified GenAI output in court submissions and the peril it creates for lawyers and judges.

II. Disruptive Cyber Incidents at Major Retailers

In ransomware, past is prologue:

* 2014: Ransomware attack on Sony Pictures executed by the North Korean-sponsored Lazarus Group;

* 2017: Worldwide WannaCry ransomware attacks infected over 200,000 computers (also attributed to the Lazarus Group);

* 2021: Ransomware attack on Colonial Pipeline cut fuel supplies across the US East Coast;

* 2023: Ransomware attack on MGM Resorts by Scattered Spider disrupted hotel and casino operations on the Las Vegas Strip.

Vast reportage, financial harm (in ransom paid and business losses), and reputational damage to valuable corporate brands have not led to rigorously improved ransomware defenses or enhanced resiliency plans and preparations.

In April 2025, a breach infected the IT systems at UK department store Marks & Spencer (“M&S”) with ransomware, exfiltrated customers’ personal information, and delivered a ransom demand directly to its CEO. The attack began over Easter weekend, causing a cascade of disruptive transactional anomalies: customers nationwide experienced glitches in M&S stores as contactless card payments and “Click & Collect” order pickups abruptly failed. By Monday, April 21, contactless checkout and order collection systems were down across the chain.

M&S could not accept website orders from mid-April to mid-June 2025 and reverted to processing orders by pen and paper. Illustrating the breadth of the disruption:

* the company anticipated its website might not return to pre-incident operation levels until mid-July 2025;

* the company had to “move billions of pounds of fresh food, drinks, and clothing after it switched off its automated stock systems;” and

* the company reported an anticipated “£300 million reduction to its annual profit due to the attack and a £600 million loss in its market cap.”

In late April, two other UK retailers reported disruptive cyberattacks: Co-Operative Group shut down parts of its IT systems “to fend off a hack” (apparently part of the same attack on M&S), and luxury department store Harrods “restricted internet access” to thwart intrusion attempts.

Lessons Learned: According to M&S’s CEO, hackers gained access through a social engineering stratagem, though conflicting accounts of the breach were reported. Initial reports said hackers impersonated a trustworthy third-party individual, tricking M&S employees into disclosing passwords or login access. That familiar tactic exploits the gap between impersonators’ deception skills and the trust, or inadequate diligence, of unsuspecting or inattentive employees. A later report described impersonation of high-level corporate executives who “pressured tech-support workers to give them access to corporate networks.”

Both accounts suggest the attacks involved a human factor that hacker groups have learned to aggravate and exploit: multi-factor authentication (“MFA”) fatigue.

MFA fatigue refers to the frustration and annoyance users experience when constantly entering additional login credentials, such as one-time passwords sent via text message or an authentication app. MFA fatigue often leads users to disable MFA controls, creating security risks. . . .

[Hacker group] Dragon Forcs [sic] have a special talent for targeting helpdesk staff, manipulating them into resetting passwords and providing that crucial first foothold.

In practice, malicious actors obtain an employee’s username and password through social engineering, phishing deceptions, password cracking, or purchases of stolen credentials on the dark web. Attackers then bombard the employee with automated MFA notifications in rapid succession, requesting the second authentication factor. If the employee checks the source of the request, the ruse may be discovered. To avert that, attackers pressure the employee with a bogus claim of suspicious account activity or a fake warning that failure to respond immediately will lock the account. Employees pressed for time by other tasks may succumb to MFA fatigue and supply the second authentication factor, or disable the MFA system altogether. That breach gives hackers a beachhead inside the company’s system.
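
The defensive side of this pattern can be illustrated with a short sketch. The Python below (with hypothetical names and thresholds) shows two mitigations commonly described for MFA push bombing: throttling repeated push prompts per user and requiring number matching so a distracted tap on "Approve" is not enough. It is a minimal illustration under those assumptions, not any identity provider's actual implementation.

```python
# Minimal sketch (hypothetical names): two common mitigations for MFA
# "push bombing" -- throttling repeated push prompts and number matching.
import random
import time
from collections import defaultdict, deque

PUSH_LIMIT = 3        # max push prompts allowed per user...
WINDOW_SECONDS = 300  # ...within this sliding time window

_recent_pushes = defaultdict(deque)  # user -> timestamps of recent prompts


def allow_push(user: str) -> bool:
    """Refuse to send yet another push prompt if the user is being flooded."""
    now = time.time()
    timestamps = _recent_pushes[user]
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= PUSH_LIMIT:
        return False  # suspicious burst: alert security instead of prompting again
    timestamps.append(now)
    return True


def number_matching_challenge():
    """Show a two-digit code on the login screen; the user must type the same
    code into the authenticator app, so blind 'Approve' taps no longer work."""
    code = random.randint(10, 99)
    return code, (lambda entered: entered == code)


if __name__ == "__main__":
    for attempt in range(5):
        print(f"push {attempt + 1} allowed:", allow_push("alice"))
    code, verify = number_matching_challenge()
    print("challenge code:", code, "| matching entry accepted:", verify(code))
```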

In a new twist, hackers reportedly utilize GenAI tools to enhance these attacks. They create fake images of a human target, use those images to synthesize a fake government-issued ID, use the images again to generate a deepfake video to deceive a facial recognition authentication system, and use the resulting credentials to open a fake account that can enable the attacker to impersonate the target to co-employees.

Lessons Learned: Despite improvements in cybersecurity tools enabling reinforced authentication (such as dual-factor processes), human lapses remain a weak point. Hackers continue to exploit human behavior using phishing and other social engineering tactics to collect pre-attack intelligence, which is often the key to successful intrusions. AI makes the exploits worse:

Voice phishing . . . is shooting up the social engineering charts . . . . The reason . . . is AI, which is not only able to write highly convincing fake emails but also creates voices with perfect local accents that can work off a script. Say goodbye to the supposed Nigerian princes offering you a cut to their millions, say hello to your chief executive asking you to pay an invoice to a new creditor.

MFA is effective when users adhere to protocols. Regular training should reinforce the rules and their rationale, using examples of MFA fatigue attacks, including GenAI enhancements. Once attention to security protocols wanes, security lapses follow, letting attackers turn MFA tools against users. Authentication by humans is necessary, but it is also where human error remains prevalent and where hackers persist in breaching company cybersecurity. We see another AI twist on this trend in the following discussion of authentication breaches in financial transactions.

III. Misappropriations of Electronic Fund Transfers

On September 3, 2024, the FBI issued an alert that malicious cyber actors in North Korea (DPRK) had “conducted research” on “targets connected to cryptocurrency exchange-traded funds (ETFs).” The alert warned that the “research” included “pre-operational preparations suggesting North Korean actors may attempt malicious cyber activities against companies associated with cryptocurrency ETFs or other cryptocurrency-related financial products.”

The alert described ways that DPRK cyber actors routinely impersonate individuals a target employee may know personally or indirectly, and it provided a bullet-point list of indicators of possible DPRK cyber actors at work. Examples included:

* “Insistence on using non-standard or custom software to complete simple tasks easily achievable through the use of common applications (i.e., video conferencing or connecting to a server).

* Requests . . . to enable call or video teleconference functionalities supposedly blocked due to a victim’s location.

* Requests to move professional conversations to other messaging platforms or applications.”

The alert recommended mitigations to lower risks from DPRK’s “advanced and dynamic social engineering capabilities,” including:

* “Develop your own unique methods to verify a contact’s identity using separate unconnected communication platforms. For example, if an initial contact is via a professional networking or employment website, confirm the contact’s request via a live video call on a different messaging application.

* Do not store information about cryptocurrency wallets — logins, passwords, wallet IDs, seed phrases, private keys, etc. — on Internet-connected devices.”

These recommended mitigations require company personnel at all levels to become more engaged in verification and to resist shortcuts and the temptation to choose convenience over security protocols.

On January 14, 2025, the governments of the United States, Japan, and South Korea issued a joint statement (hereinafter “Joint Statement”) to the “blockchain technology industry” warning that DPRK’s advanced persistent threat actors, including Lazarus Group, were conducting cyberattacks to “steal cryptocurrency and targeting exchanges, digital asset custodians, and individual users.” The Joint Statement attributed to the DPRK five 2024 cyber thefts from crypto companies totaling over $650,000,000.

Despite the warning, on February 21, 2025, DPRK hackers executed the largest crypto heist to date — stealing $1.5 billion in Ethereum tokens from ByBit, a Dubai-based cryptocurrency exchange. The breach vector was, again, the point of authentication.

ByBit protected most of its cryptocurrency holdings by using “cold storage” — wallets disconnected from the internet. To enable fund transfers, ByBit moved some cryptocurrency from the cold wallet to an internet-connected — “warm” — wallet using an authorization protocol requiring multiple signatures. “A multi-signature wallet is a type of cryptocurrency wallet that requires multiple signatures, instead of just one, to execute each transaction.”
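
To make the multisig concept concrete, the following is a minimal, hypothetical sketch in Python of the m-of-n threshold logic. Real multisig wallets, including the Safe contracts ByBit used, enforce this on-chain with cryptographic signatures rather than a simple counter; the sketch only illustrates the approval threshold described above.

```python
# Hypothetical sketch of an m-of-n multisig approval check. Real multisig
# wallets verify cryptographic signatures on-chain; this only illustrates
# the threshold logic: a transaction executes only after enough owners approve.
from dataclasses import dataclass, field


@dataclass
class MultisigWallet:
    owners: set                    # addresses allowed to sign
    threshold: int                 # number of approvals required to execute
    approvals: dict = field(default_factory=dict)  # tx_id -> set of signers

    def approve(self, tx_id: str, owner: str) -> None:
        if owner not in self.owners:
            raise PermissionError(f"{owner} is not an owner of this wallet")
        self.approvals.setdefault(tx_id, set()).add(owner)

    def can_execute(self, tx_id: str) -> bool:
        return len(self.approvals.get(tx_id, set())) >= self.threshold


wallet = MultisigWallet(owners={"alice", "bob", "carol"}, threshold=2)
wallet.approve("tx-42", "alice")
print(wallet.can_execute("tx-42"))  # False: only one of two required approvals
wallet.approve("tx-42", "bob")
print(wallet.can_execute("tx-42"))  # True: threshold reached
```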

ByBit used technology from SAFE to implement its multi-signature (“multisig”) wallets. Hackers phished a SAFE developer and obtained credentials enabling access to source code for the web-based user interface ByBit employees would see in its multisig process. When ByBit made its cold-to-warm wallet transfer, the details looked right to ByBit employees. They “blind-signed the messages without carefully checking their contents, trusting what the Safe Web3 Application displayed.” It appears there was no practical way for users to check the wallet details and detect that the ByBit cryptocurrency was redirected to a hacker’s account. When ByBit’s CEO authorized the requested transfer, $1.5 billion in Ethereum tokens moved not as the requester instructed, but to the hackers’ wallet.
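
The blind-signing failure can also be illustrated with a short, hypothetical sketch of the "verify what you sign" principle: a signer independently recomputes a digest of the raw transaction data and compares it with the digest shown on a trusted device, rather than trusting only what a web interface renders. This is a simplified illustration under those assumptions, not Safe's or ByBit's actual signing workflow.

```python
# Hypothetical sketch of "verify what you sign": independently recompute a
# digest of the raw transaction and compare it to the digest displayed on a
# trusted device, instead of trusting a (possibly compromised) web UI.
import hashlib
import json


def tx_digest(raw_tx: dict) -> str:
    """Deterministic digest of the raw transaction fields."""
    canonical = json.dumps(raw_tx, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


def safe_to_sign(raw_tx: dict, digest_shown_on_trusted_device: str) -> bool:
    """Approve only if the independently computed digest matches the one displayed."""
    return tx_digest(raw_tx) == digest_shown_on_trusted_device


# What the signer believes they are approving...
intended = {"to": "0xColdToWarmWallet", "value_eth": 100, "data": "0x"}
# ...versus what a tampered interface actually submits for signature.
tampered = {"to": "0xAttackerWallet", "value_eth": 100, "data": "0x"}

print(safe_to_sign(intended, tx_digest(intended)))  # True: digests match
print(safe_to_sign(tampered, tx_digest(intended)))  # False: payload was altered
```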

Lessons Learned: Proponents of cryptocurrencies claim they are secure. The ByBit heist illustrates that social engineering risks remain and that attackers can use them to circumvent security tools. Furthermore, humans cannot be an effective check when a transaction process, such as blind signing, locks them out of the loop. Security tools depend on operational security, i.e., the skill, practices, habits, and attention of the people who use them. The lamentable lapse continues to occur not at the moment of breach, but in the preceding months, when humans did not train and practice operational security to detect social engineering stratagems and deficiencies in security procedures. These lessons are decades old, as are the failures to learn and apply them.

IV. "Why Do Lawyers Keep Using ChatGPT?"

When tech writers call out lawyers as a group repeatedly duped by GenAI, it’s worth revisiting the missteps and reinforcing the lessons we’ve covered in previous Survey articles. Lawyers are not the only professionals at risk of GenAI inaccuracies. That being said, justice depends on lawyers presenting laws and relevant facts accurately.

Three recent cases illustrate (again) the risk of misunderstanding or misusing GenAI in legal practice: Wadsworth v. Walmart, Inc., Kohls v. Ellison, and Lacey v. State Farm General Insurance Co. Each case takes the story of GenAI “hallucinations” in a new direction. “Hallucination” is now commonly used to refer to AI-generated false or misleading content. “Hallucination” and alternatives like “confabulation” imply that AI tools think, reason, and discuss like humans. They do not. Anthropomorphizing AI tools can exacerbate the risk of overreliance and mislead people into thinking GenAI errs in the ways humans do.

Wadsworth. In 2023, we wrote about lawyers misunderstanding ChatGPT as a "super legal research" tool. The lawyer missteps in Wadsworth appear similar: a litigator submitted a motion citing GenAI-hallucinated authorities (eight cases out of nine), and the court noticed "peculiar" language explaining the applicable legal standard. The new twist is this tidbit: "Our internal artificial intelligence platform 'hallucinated' the cases in question while assisting our attorney in drafting the motion in limine."

Responding to the judge’s order for a thorough explanation, the lawyers disclosed:

* the lawyer who wrote the motion uploaded a draft to his firm’s GenAI tool and prompted it to add more case law and an argument his prompt described;

* it was the first time the lawyer used GenAI that way;

* his firm had provided training, but he didn’t recall the training details; and

* he relied on the GenAI tool, did not verify the output, and did not know the term “AI hallucination.”

Responding to the incident, the firm cautioned all its attorneys about “hallucinations,” added a warning pop-up at the start of each GenAI session, and undertook to strategize additional safeguards and training. The response suggests weaknesses in implementing the GenAI tool, which the firm introduced less than three months before the incident. The lead trial counsel explained:

[The lawyer who drafted the motion] has indicated that he misunderstood our internal A.I. support and mistakenly thought it was fully capable of researching and drafting briefs[.] . . . However, our A.I. tool was never intended to be relied on to research and draft any legal filing without verification of legal authorities. It was intended to be used to help a lawyer perform, not to replace the lawyer’s performance.

Lessons Learned: GenAI tools designed for lawyers, including those proprietary to law firms, are not foolproof. They are susceptible to the same limitations as general-purpose GenAI chatbots. Lawyers must manage GenAI tools with an understanding of their limitations. Most importantly, they must carefully review and verify the output they use to inform legal strategies and submissions to courts, other tribunals, and administrative agencies.

Although this section discusses court decisions, the same risks attend unverified use of GenAI output in other tribunals. In fact, to the extent arbitrators, mediators, and administrative agency staff have less support from clerks or scrutiny by opponents, the potential for error based on “hallucinated” authority could be greater in these settings.

In 2023, we noted that GenAI terms of use warn about its limitations. Periodically reviewing terms of use and product documentation remains sound practice. Those documents are a guide to a technology’s risks and limitations, helping lawyers spot and avoid AI missteps.

Kohls. Lawyers are not the only people in courtrooms offering hallucinated material to the court (and potentially to juries). The plaintiffs in Kohls challenged the constitutionality of a Minnesota statute regulating AI-generated political content. Minnesota’s Attorney General filed expert declarations supporting the restrictions. The plaintiffs challenged as “unreliable” the declaration of “an expert on ‘misinformation and deepfakes'” because he included three inaccurate AI-generated citations. The plaintiffs argued (successfully) that citation “to a completely fictitious study fall [sic] well below” the evidentiary standard for expert testimony. The expert corrected his declaration, asserting the incorrect citations did not alter the substance of his opinion, which was supported by material correctly cited elsewhere in the original declaration. He also admitted that he did not follow his usual practice of validating his citations with reference software he uses when writing academic articles.

The court did not fault the expert for using AI for research. The court faulted him for failing to exercise the degree of care a declaration made under penalty of perjury merits:

It is particularly troubling to the Court that Professor Hancock typically validates citations with a reference software when he writes academic articles but did not do so when submitting the Hancock Declaration as part of Minnesota’s legal filing. One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles.

As the court reminds in Kohls, lawyers bear responsibility for the veracity of evidence they put before judges and juries. Accepting that the Attorney General’s team were unaware of the fake citations, the court nevertheless quoted Rule 11 as “impos[ing] a ‘personal, nondelegable responsibility’ to ‘validate the truth and legal reasonableness of the papers filed’ in an action.” The Order continues:

The Court suggests that an “inquiry reasonable under the circumstances,” Fed. R. Civ. P. 11(b), may now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.

Lessons Learned: The court’s suggestion is sound practice advice, applicable not only to expert reports, but also to statements and documents created by clients, witnesses, and third parties compelled to produce documents. Lawyers’ standard lists of questions for clients and witnesses should cover use of GenAI to prepare materials provided by them. Screening questions might include identifying if and how witnesses or client representatives used GenAI and for what purpose. From there, lawyers can decide whether to take additional steps to validate the GenAI-assisted submission, such as verification by interviews and reviewing source materials.

The Judiciary of the United Kingdom also provides useful guidance, which discourages relying on GenAI for legal research:

AI tools are a poor way of conducting research to find new information you cannot verify independently. They may be useful as a way to be reminded of material you would recognise as correct.

Lawyers should recognize that GenAI is widely used by individuals and organizations. GenAI tools can be used effectively for improving the fluency of written material and making routine activities like notetaking and summarizing documents more efficient. Lawyers should get to know firsthand how GenAI tools work so that they develop a sense of GenAI’s strengths and weaknesses. For example, use a GenAI tool to write a letter or brief as a lawyer training exercise. (Make up the facts; do not use confidential client information.) Choose a legal issue that’s familiar; people tend to defer to technology when they are less knowledgeable about the subject matter. Then verify the facts and arguments and assess the quality of the logic and tone. Review GenAI output as if it were the work of a legal assistant or first-year lawyer, not a leading expert.

Lacey. This case drew attention because the Special Master hearing discovery disputes admitted he was almost duped by AI-generated cases. The errant law firms were co-counsel; one firm used GenAI tools in drafting a brief; the other didn’t inquire about AI. Neither firm checked the cited cases. When the Special Master inquired about two citations, counsel corrected those he called out but did not address others in the brief. Instead, counsel confirmed citations “had been ‘addressed and updated,'” and did not disclose AI use. After further inquiry and hearings, the Special Master concluded the lawyers involved “acted in a manner that was tantamount to bad faith,” and, in addition to monetary sanctions, he declined to award any of the discovery relief sought by those firms for their clients.

The Order ended:

A final note. Directly put, Plaintiff's use of AI affirmatively misled me. I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them — only to find that they didn't exist. That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order. Strong deterrence is needed to make sure that attorneys don't succumb to this easy shortcut.

Judges are, by now, aware of lawyers' GenAI missteps and the corresponding risk that their own reputations and credibility could be tarnished by decisions relying on "hallucinated" authorities. Beyond that, misleading courts with non-existent AI-generated authorities presents a systemic issue for common law systems like ours. Cases convincingly argued on the basis of fictitious precedent add to the burden and cost of ferreting out the "real" law and give the public another reason to distrust the legal system.

Lesson Learned: "Hallucinated" authority poisons the law with misinformation. If GenAI output is not verified and fictitious authorities are not rooted out before decisions are made and reported, we risk a miscarriage of justice in the directly affected case and the erosion of public trust in our common law system.

V. Accountability When Bad Things Happen with GenAI

During the Survey year, a federal district court in Florida granted in part and denied in part a motion to dismiss claims brought by parent survivors of teen suicide against AI chatbot developers. The ruling is significant because it begins to sort through theories of liability for injury allegedly caused by GenAI gone bad. The court allowed plaintiffs to proceed on claims alleging product liability, negligence, failure to warn, violation of Florida’s deceptive and unfair trade practices law, and unjust enrichment. The court concluded it had personal jurisdiction over individual developer defendants, and ruled that claims could proceed against Google as a component manufacturer and for aiding and abetting tortious conduct. It rejected defendants’ argument that the chatbot’s output was speech protected by the First Amendment. Since the court’s order, various organizations have entered appearances as amici to support one side or the other.

It is too early for lessons learned here. There has been much discussion, but no final resolution, of laws regulating artificial intelligence in the United States or, alternatively, freeing it from regulation. Cases like Garcia remind us we must not lose sight of longstanding theories of legal liability and how they apply when AI is a cause of, or contributing factor in, death, injury, or property damage. Whether accountability will land on developers or deployers of artificial intelligence (or some combination of them), we shouldn’t expect businesses and consumers harmed by AI to be left without recourse for their losses.

VI. Conclusion

Lawyers, as AI users and as advisors to developers, deployers, other users, and policymakers, should approach AI recognizing it’s not all good or all bad. Responsible AI use entails investigating and understanding the capabilities, limitations, and risks of AI tools in the context in which they are deployed and considering how liability for bad outcomes should be allocated to provide meaningful relief for harms not avoided.

Verifying GenAI output before using it in any work product is a baseline practice for lawyers. As models’ capabilities improve, users may “be less likely to challenge or verify the model’s responses” and rely on AI tools when they should not.

Careful attention to human factors remains essential for cybersecurity. Vulnerability to social engineering remains a weak link, and implementing security technologies without operational security sets the stage for lapses with catastrophic consequences. The need to limit our lapses will grow as GenAI’s rapidly improving capabilities reach the hands of malicious actors.

Read more on American Bar Association
