Product architecture
How Kam AI Is Built

Kam AI
Product and research

Most AI betting products start with a promise:
Better picks.
Kam AI starts with a different question:
What has to be true before an answer deserves trust?
That question shapes the whole system.
Kam is not built as a pick feed. It is not built as a hype machine. It is not built to make every game feel bettable.
Kam is built as a sports-intelligence system.
It collects shared facts. It stores memory. It checks what changed. It explains why it matters. Then it helps the user decide what to do next.
The short version is simple:
External data comes in. Kam turns it into useful facts. The app asks for those facts. The agent explains them. The user keeps the final judgment.
Think of Kam as four parts.
The scheduler is the factory.
It collects sports data, market data, odds, results, props, news-like signals, and other shared facts. It does the heavy work before the user asks a question.
DynamoDB is the memory room.
It stores the facts Kam has already approved, normalized, cached, and made ready to use.
The API is the coach.
It looks at the user request, finds the right skill, loads the right memory, checks what tools are allowed, and prepares the answer path.
The app is the surface.
It shows the user the board, the watchlist, the saved ideas, the chat answer, the alerts, and the review loop.
The app does not guess from raw data.
The app asks the system for trusted objects.
That rule matters more than any single feature.
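That rule can be sketched as a guard the app runs on anything the API returns before rendering it. The shape and names here are illustrative, not Kam's real API:

```typescript
// A trusted object as the app might see it. All field names are assumed.
type TrustedObject = {
  kind: "line" | "injury" | "signal";
  payload: Record<string, unknown>;
  generatedAt: string; // ISO timestamp set by the scheduler, not the app
};

// The app's side of the rule: render only objects the system has shaped.
function isTrustedObject(x: unknown): x is TrustedObject {
  if (typeof x !== "object" || x === null) return false;
  const o = x as Record<string, unknown>;
  return (
    (o.kind === "line" || o.kind === "injury" || o.kind === "signal") &&
    typeof o.payload === "object" && o.payload !== null &&
    typeof o.generatedAt === "string"
  );
}
```

Anything that fails the guard never reaches the screen, so the app cannot quietly fall back to guessing from raw data.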
Kam has one core rule:
Facts first. Prose second.
That means the system should produce useful decision objects before it produces polished language.
A line move is a fact.
An injury update is a fact.
A saved idea is a fact.
A market signal is a fact.
An outcome review is a fact.
The AI explanation comes after those facts exist.
This keeps Kam from becoming just another chatbot that sounds confident while making weak claims.
In sports betting, the hard part is not only finding data.
The hard part is knowing which data deserves attention.
A bettor can find a trend for almost anything. A bettor can turn one stat into a story. A bettor can confuse action with edge.
Kam is designed to slow that down.
The system does not start by asking, "What pick should we give?"
It asks:
What object are we looking at?
What changed?
Is the data fresh?
Is the signal strong?
Does this connect to something the user already saved, watched, or asked about?
Should the answer be bet, wait, pass, track, or review?
That is a different product.
It is not a list of picks.
It is a decision system.
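Those questions can be sketched as one deterministic decision path. The thresholds and field names below are assumptions for illustration, not Kam's real logic:

```typescript
// The five outcomes named in the text.
type Decision = "bet" | "wait" | "pass" | "track" | "review";

// An assumed view of the signal the system is looking at.
interface SignalView {
  fresh: boolean;       // is the data recent enough to act on?
  strength: number;     // 0..1 normalized signal strength
  userLinked: boolean;  // saved, watched, or previously asked about
  settled: boolean;     // has the underlying event already resolved?
}

function decideNext(s: SignalView): Decision {
  if (s.settled) return "review";       // an outcome exists: learn from it
  if (!s.fresh) return "wait";          // stale data never drives action
  if (s.strength < 0.3) return "pass";  // weak signal: do nothing
  if (!s.userLinked) return "track";    // interesting, but not yet a thesis
  return "bet";                         // fresh, strong, and part of a thesis
}
```

The point is the order of the checks: staleness and weakness are answered before "bet" is even reachable.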
Kam's product loop is clear.
Watch something.
Detect signal changes.
Save an idea.
Monitor what happens.
Review the lesson.
The watchlist is for things the user wants to monitor.
The portfolio is for ideas the user wants to track and learn from.
That difference matters.
The watchlist says, "Keep an eye on this."
The portfolio says, "This was my thesis. Now let's see what happened."
Over time, that creates something more useful than one-off answers.
It creates a decision journal.
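The distinction can be sketched with two record shapes. The field names are assumed:

```typescript
// A watchlist entry only monitors.
interface WatchlistEntry {
  objectId: string;  // the game, line, or prop being monitored
  addedAt: string;
}

// A portfolio entry records a thesis and, later, its outcome.
interface PortfolioEntry extends WatchlistEntry {
  thesis: string;                      // why the user saved this idea
  outcome?: "win" | "loss" | "push";   // filled in by the review loop
  lesson?: string;                     // what the review taught
}

// The decision journal is just the portfolio entries that have been reviewed.
const journal = (entries: PortfolioEntry[]) =>
  entries.filter((e) => e.outcome !== undefined);
```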
Kam does not own the raw world.
External sources own their own payloads, gaps, delays, and mistakes.
Kam owns what happens next.
It owns normalization.
It owns caching.
It owns signal detection.
It owns watchlist relevance.
It owns saved ideas.
It owns outcome review.
It owns the explanation layer.
This is why freshness is part of the product.
If a number is old, Kam should say it is old.
If a source is missing, Kam should say it is missing.
If a signal is weak, Kam should say it is weak.
Trust comes from showing the state of the data, not hiding it.
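One way to sketch that rule: freshness is a computed value the product surfaces, never a hidden flag. The threshold here is an assumption:

```typescript
// The three states named in the text: fresh, stale, missing.
type Freshness = "fresh" | "stale" | "missing";

function freshness(generatedAt: string | null, now: Date, maxAgeMs: number): Freshness {
  if (generatedAt === null) return "missing";          // say it is missing
  const age = now.getTime() - new Date(generatedAt).getTime();
  return age <= maxAgeMs ? "fresh" : "stale";          // say it is old
}
```

Because every surface calls the same function, the board and the chat cannot disagree about whether a number is old.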
The technical flow is simple on purpose.
External sources feed the scheduler.
The scheduler writes shared facts into DynamoDB and S3.
The Node and Express API reads those facts, applies product logic, and powers chat.
The React Native app renders the result.
That creates a clean boundary.
The app is not a data scraper.
The chat is not a guessing layer.
The scheduler is not a storyteller.
Each layer has a job.
When a user opens Kam, the system should already have useful facts ready.
When a user asks a question, Kam should load the best approved objects first.
Only after that should the agent explain the answer.
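The boundary can be sketched with in-memory stand-ins. Here DynamoDB is modeled as a plain key-value store, and the API layer reads only from it, never from upstream sources. All names are illustrative:

```typescript
// Stand-in for the DynamoDB read path.
interface FactStore {
  get(key: string): unknown | undefined;
}

// Scheduler side: the only writer of shared facts.
function makeStore(seed: Record<string, unknown>): FactStore & { put(k: string, v: unknown): void } {
  const m = new Map<string, unknown>(Object.entries(seed));
  return {
    get: (k: string) => m.get(k),
    put: (k: string, v: unknown) => { m.set(k, v); },
  };
}

// API side: reads approved facts and applies product logic. By construction
// it has no path to external sources, so it cannot scrape or guess.
function readBoard(store: FactStore): unknown[] {
  const facts = store.get("board:today");
  return Array.isArray(facts) ? facts : [];  // missing data is an empty board, not a guess
}
```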
Kam stores product truth as intelligence objects.
These are not just paragraphs.
They are structured records with fields like facts, confidence, data quality, source references, coverage status, and generated time.
The object should still be useful even if the headline and summary are removed.
That is the test.
If the prose disappears, does the product still know what happened?
If yes, the object is real.
If no, it is probably just text wearing a product costume.
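That test can be sketched directly, with an assumed shape for the object:

```typescript
// An assumed shape for an intelligence object. Only headline and summary
// are prose; everything else is structured fact.
interface IntelligenceObject {
  headline?: string;               // prose: removable
  summary?: string;                // prose: removable
  facts: Record<string, unknown>;
  confidence: number;              // 0..1
  dataQuality: "good" | "partial" | "poor";
  sources: string[];
  coverage: "full" | "partial" | "missing";
  generatedAt: string;
}

// "If the prose disappears, does the product still know what happened?"
function stripProse(o: IntelligenceObject): IntelligenceObject {
  const { headline, summary, ...rest } = o;
  return rest;
}
```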
Kam has to avoid a common AI product failure:
The screen says one thing, and the chat says another.
That breaks trust fast.
So the docs define product read models.
The board, game detail, watchlist, saved ideas, notifications, and chat should all explain the same underlying truth.
They can format it differently.
They should not invent different facts.
If the board shows a line, chat should not explain a different line.
If the watchlist shows stale data, chat should not pretend it is fresh.
If a source is missing, both surfaces should respect that.
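A sketch of the idea: one read model, several formatters, zero invented facts. The shapes are assumed:

```typescript
// The single underlying truth both surfaces render.
interface LineReadModel {
  game: string;
  line: number;
  fresh: boolean;
}

// Different formats, same facts. Neither formatter can change the number
// or hide the staleness, because both only read the shared model.
const forBoard = (m: LineReadModel) =>
  `${m.game}  ${m.line}${m.fresh ? "" : " (stale)"}`;

const forChat = (m: LineReadModel) =>
  `${m.game} is at ${m.line}${m.fresh ? "" : ", but that number is stale"}.`;
```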
Kam chat is not meant to be one huge prompt with everything dumped inside.
The runtime has phases.
First, it cleans the request.
Then it loads memory.
Then it resolves the skill.
Then it builds context.
Then it plans tools.
Then it assembles the prompt.
Then it validates the answer path.
Then it streams the final answer.
Then it leaves a trace.
That trace matters.
If an answer fails, Kam should be able to inspect what happened and turn the failure into a replayable test.
That is how the system gets better.
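The phases above can be sketched as an ordered pipeline where every phase appends to a trace. The phase names follow the text; everything else is assumed:

```typescript
// Each phase transforms a context and leaves its name in the trace.
type Phase = (ctx: Record<string, unknown>) => Record<string, unknown>;

function runPipeline(phases: [string, Phase][], request: Record<string, unknown>) {
  const trace: string[] = [];  // the replayable record of what ran
  let ctx = request;
  for (const [name, phase] of phases) {
    ctx = phase(ctx);
    trace.push(name);
  }
  return { ctx, trace };
}

// Placeholder implementations of the named phases.
const phases: [string, Phase][] = [
  ["clean", (c) => ({ ...c, cleaned: true })],
  ["loadMemory", (c) => ({ ...c, memory: [] })],
  ["resolveSkill", (c) => ({ ...c, skill: "board" })],
  ["buildContext", (c) => c],
  ["planTools", (c) => ({ ...c, tools: [] })],
  ["assemblePrompt", (c) => c],
  ["validate", (c) => c],
  ["stream", (c) => c],
];
```

Because the trace records every phase in order, a failed answer can be replayed phase by phase instead of being debugged from the final text alone.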
Kam uses skills as reusable recipes.
A skill tells the system how to handle a certain kind of user need.
For example, board questions, my-bets questions, movement questions, trend questions, and watchlist questions should not all be handled by the same vague prompt.
They need different data, different checks, and different answer contracts.
The harness should stay thin.
The skill should carry the real procedure.
That makes the system easier to test, easier to debug, and easier to improve.
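A sketch of that split, with illustrative names: the harness only routes, and each skill declares its own data needs and answer contract:

```typescript
// A skill carries the real procedure for one kind of user need.
interface Skill {
  id: string;
  matches(intent: string): boolean;  // routing test
  requiredFacts: string[];           // data this skill must load
  answerContract: string;            // what a valid answer must contain
}

// The harness stays thin: it only finds the right skill.
function resolveSkill(skills: Skill[], intent: string): Skill | undefined {
  return skills.find((s) => s.matches(intent));
}

const skills: Skill[] = [
  { id: "board", matches: (i) => i.includes("board"), requiredFacts: ["board:today"], answerContract: "games+lines+freshness" },
  { id: "movement", matches: (i) => i.includes("move"), requiredFacts: ["lines:history"], answerContract: "delta+direction+timestamp" },
];
```

Each skill can then be tested on its own: given this intent, did routing pick it, and did the answer satisfy its contract?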
Kam can send useful alerts, but those alerts should not be decided by an LLM alone.
The docs make this clear.
The matcher should be deterministic.
That means Kam looks at the signal, the watched object, the user's preferences, the alert contract, quiet hours, and idempotency rules.
Then it decides whether to notify.
The LLM can explain the alert.
It should not be the thing that randomly decides the alert exists.
That is how Kam avoids noisy notifications.
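A sketch of such a matcher: every input named above becomes a plain field check, with no model call anywhere. The field names are assumptions:

```typescript
// Everything the deterministic matcher looks at.
interface AlertInput {
  signalStrength: number;       // from the detected signal
  watched: boolean;             // is the object on the user's watchlist?
  optedIn: boolean;             // user preference for this alert type
  hourLocal: number;            // 0..23, user-local time
  quietHours: [number, number]; // inclusive start hour, exclusive end hour
  alreadySent: boolean;         // idempotency: was this alert sent before?
}

function shouldNotify(a: AlertInput): boolean {
  const [start, end] = a.quietHours;
  // Quiet hours may wrap midnight (e.g. 22 to 7).
  const quiet = start <= end
    ? start <= a.hourLocal && a.hourLocal < end
    : a.hourLocal >= start || a.hourLocal < end;
  return a.watched && a.optedIn && !a.alreadySent && !quiet && a.signalStrength >= 0.5;
}
```

The LLM can then render the explanation for an alert this function already approved; it never gets a vote on whether the alert exists.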
Kam treats evals as product infrastructure.
That means quality is not checked only at the end.
The eval ladder checks smaller things first:
Can the system route the request?
Did it choose the right skill?
Did it plan the right tools?
Did the prompt include the right contract?
Did the answer use the same facts as the screen?
Did the trace replay still pass?
Did the final live answer help a real betting decision?
This matters because live end-to-end tests are useful, but they are late.
Kam needs smaller gates that fail close to the source of the problem.
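The ladder can be sketched as ordered gates that report the first failure, so the break surfaces close to its source. The gate names follow the text; the checks are placeholders:

```typescript
// One rung of the ladder: a named, cheap check.
type Gate = { name: string; pass: () => boolean };

// Run the rungs in order and stop at the first failure.
function runLadder(gates: Gate[]): { ok: boolean; failedAt?: string } {
  for (const g of gates) {
    if (!g.pass()) return { ok: false, failedAt: g.name };  // the gate closest to the problem
  }
  return { ok: true };
}
```

A live end-to-end eval would sit at the top of this list; by the time it runs, the cheaper gates below it have already localized most failures.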
Kam Ops is the internal control room for the agent system.
It is not end-user product UI.
It is not a second source of truth.
It exists to make the real sources easier to inspect.
It shows eval health, replay receipts, skill capsules, tool policies, and release blockers.
When a chat answer is weak, Kam Ops should help the team review it.
The path is direct:
Inspect the request.
Inspect the answer.
Label the failure.
Write the ideal answer.
Turn the miss into a contract, fixture, or replay eval.
Rerun the check.
That is how a weak answer becomes a stronger system.
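That path can be sketched as a replay case built from a labeled miss. All shapes here are assumed:

```typescript
// A labeled miss, frozen into a replayable eval.
interface ReplayCase {
  request: string;                     // the original user request
  observed: string;                    // the weak answer as shipped
  label: string;                       // what kind of failure it was
  ideal: string;                       // the hand-written target answer
  check: (answer: string) => boolean;  // the contract extracted from the miss
}

// Rerun the check against any answer function, old or new.
function replay(c: ReplayCase, answerFn: (req: string) => string): boolean {
  return c.check(answerFn(c.request));
}
```

The original weak answer fails its own replay case by construction; a fixed system passes it, and the case stays in the suite to catch regressions.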
Kam is not trying to be a high-frequency trading terminal.
It is not trying to be a sportsbook.
It is not trying to handle real-money transactions.
It is not trying to promise guaranteed outcomes.
And it is not trying to replace human judgment.
Kam is a research workspace, signal monitor, watchlist system, decision journal, and review loop.
The product should feel calm.
It should feel clear.
It should show what is known, what changed, what is stale, what is missing, and what the user can do next.
Kam AI is built around a simple belief:
Better decisions come from better structure.
Not louder picks.
Not more confidence.
Not more dopamine.
Better structure.
Shared facts before answers.
Memory before opinion.
Signals before explanations.
Contracts before creativity.
Evals before release confidence.
Review before scale.
That is the system behind Kam.
And that is the product promise:
Help serious bettors separate real edge from emotional noise by showing the truth of the decision before action is taken.
Facts first. Prose second.
The app should never guess from raw data.
Freshness is not metadata. Freshness is trust.
The LLM explains the signal. It does not invent the truth.
The watchlist is what you monitor. The portfolio is what you learn from.
A weak answer should not disappear. It should become a replayable test.
Kam is not a pick feed. It is a decision system.
Better betting decisions do not start with confidence. They start with clarity.