
In the era of AI agents and large language models (LLMs), financial analysts are no longer just human. I decided to explore DeepSeek-R1, specifically its distilled 8B-parameter variant based on LLaMA-3, to see how well it performs financial analysis: not through chat, but through code generation and critical thinking.
But there’s a twist: DeepSeek-R1 doesn’t support function calling or structured output. So, could it still reason effectively and produce accurate Python code for market analysis?
Let’s dive into the experiment.
Since the DeepSeek API was down, I used Replicate to run the model and Ollama for local testing. The task? Feed prompts to DeepSeek-R1 and parse its output into executable code for two tasks:
- Analyzing Tesla’s recent price history
- Implementing and evaluating a momentum trading strategy
The first prompt:
> “Detect today’s date, fetch Tesla’s historical data from the beginning of this month to today, and analyze last month’s prices.”
What followed was a remarkable “thinking” monologue by the model:
- Clarified conflicting time ranges (“fetch this month” vs. “analyze last month”)
- Wrote statistically sound analysis metrics: mean, min, max, and volatility
It was as if DeepSeek-R1 was reasoning aloud like a junior quant.
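Because R1 returns free-form text rather than structured output, its code has to be parsed out of the response before it can run. A minimal sketch of such a parser, assuming the model wraps its reasoning in `<think>` tags and its code in fenced blocks (the function name `extract_code` is mine):

```python
import re

def extract_code(response: str) -> str:
    # Drop the chain-of-thought reasoning emitted between <think> tags
    visible = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)
    # Collect every fenced Python block from what remains
    blocks = re.findall(r"```(?:python)?\n(.*?)```", visible, flags=re.DOTALL)
    return "\n".join(blocks)
```

If the model emits code without fences, a fallback (e.g. keeping lines that parse as Python) would be needed; fenced blocks were the common case here.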
📊 The Generated Code (Snippets)
import datetime
import yfinance as yf

tesla = yf.Ticker("TSLA")
current_month_start = datetime.date.today().replace(day=1)
# Calculate last month's date range
prev_month_end = current_month_start - datetime.timedelta(days=1)
prev_month_start = prev_month_end.replace(day=1)
# Fetch and analyze (yfinance's end date is exclusive, so add one day)
last_month_data = tesla.history(start=prev_month_start, end=prev_month_end + datetime.timedelta(days=1))
The final output included average price, daily return volatility, and appropriate edge case handling (like empty dataframes).
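The metrics themselves are straightforward pandas. A sketch of what the generated analysis amounted to; the function name `summarize_prices` and the dict layout are my framing, not the model's exact code:

```python
import pandas as pd

def summarize_prices(df: pd.DataFrame) -> dict:
    # Edge case: empty dataframe (bad date range, market holidays)
    if df.empty:
        return {}
    close = df["Close"]
    daily_returns = close.pct_change().dropna()
    return {
        "mean": close.mean(),
        "min": close.min(),
        "max": close.max(),
        "volatility": daily_returns.std(),  # std of daily returns
    }
```

Called as `summarize_prices(last_month_data)`, this reproduces the average price and daily-return volatility the model reported, with the empty-dataframe guard it included.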
—
Things got more exciting when I asked DeepSeek-R1 to implement a trading strategy based on price momentum. What stood out most: it shifted the signal by one day, aligning the theoretical signal with realistic next-day execution.
While the returns were negative (blame market timing), the strategy logic and signal plotting were correct — remarkable for a model with no memory or access to tools.
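For reference, here is a minimal version of that kind of momentum strategy with the one-day signal shift. The lookback window and the long-only rule are my assumptions for illustration, not necessarily the exact parameters R1 chose:

```python
import pandas as pd

def momentum_backtest(close: pd.Series, lookback: int = 20) -> pd.Series:
    # Signal: long when the trailing return over `lookback` days is positive
    momentum = close.pct_change(lookback)
    signal = (momentum > 0).astype(int)
    # Shift by one day: a signal observed today is only tradable tomorrow
    position = signal.shift(1).fillna(0)
    daily_returns = close.pct_change().fillna(0)
    strategy_returns = position * daily_returns
    return (1 + strategy_returns).cumprod()  # cumulative equity curve
```

Plotting `position` against `close` gives the signal chart the model produced; the equity curve's final value shows whether the strategy made or lost money over the window.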
—
Even without structured output or tool use, DeepSeek-R1 demonstrated step-by-step reasoning, self-resolved ambiguity, and correct, executable code.
This “thinking-in-code” behavior makes DeepSeek-R1 a great candidate for offline agent loops, especially when paired with a code executor.
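Pairing it with an executor can be as simple as running the extracted code in a scratch namespace. This is illustrative only; a real agent loop would sandbox untrusted model output:

```python
def run_generated(code: str) -> dict:
    # Execute model-generated code in a fresh namespace.
    # WARNING: exec on untrusted output is unsafe; sandbox in practice.
    namespace: dict = {}
    exec(code, namespace)
    # Return the variables the code defined, for inspection or re-prompting
    return {k: v for k, v in namespace.items() if not k.startswith("__")}
```

Feeding the resulting variables (or a traceback) back into the next prompt closes the loop: generate, execute, observe, revise.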
—
- LLMs don’t need tool access to think like analysts; they just need the right prompts.
- “Show your work” prompting can unlock logical reasoning and better code quality.
- Momentum strategies are a great testbed for evaluating LLM financial reasoning.
- Interpretability is the unexpected advantage: you can “watch” DeepSeek-R1 build its logic.
This could be your lightweight, LLM-powered quant research notebook — running locally or in the cloud.
—
DeepSeek-R1 may not support agents yet, but it sure thinks like one. And that’s a step closer to LLMs as your new quant research partner.
—
Liked this post? Follow for more on LLMs in finance, agentic systems, and quant research automation.

