AI consulting services — Mike O'Neal, San Francisco

AI Consulting — San Francisco Bay Area & Remote

Most AI projects fail.
Here's why mine don't.

87% of AI projects never make it to production. The difference isn't the model — it's whether the person building it understands the engineering underneath. I've been building software for 25+ years. AI is the latest layer, not the only one I know.

Why most AI consulting engagements fail

The consultant knows AI but not engineering

They can prompt an LLM and build a demo. But they can't architect a system that handles errors gracefully, scales under load, integrates with your existing tools, or runs reliably without someone babysitting it. The demo works on stage. It breaks in production.

They deliver a strategy document, not a working system

You pay $50k for a 40-page PDF that says "implement RAG pipeline" and "fine-tune a model for your domain." Then your team has to figure out how to actually build it. The strategy was never the hard part — the implementation is.

They solve the wrong problem

Not everything needs AI. Sometimes the bottleneck is a bad database query, a missing API endpoint, or a process that could be fixed with a bash script. An AI consultant who only has a hammer sees every problem as a nail. I've been solving engineering problems since before AI was commercially viable — I know when the answer isn't AI.

Don't take my word for it. Read the case study.

ClaimHawk is the system I built from scratch — custom AI models, autonomous pipelines, running in production across dental practices. 67% fewer denials, 4x faster payments. It's the best proof that my AI consulting approach works.

Read the ClaimHawk case study

How an AI consulting engagement works

Step 1

Discovery call — find the real bottleneck

You tell me what's slow, what's expensive, or what's not working. I ask questions until I understand the actual problem — not the symptom, not the feature request, the root cause. Half the time, the solution is simpler than you expected. Sometimes it's not AI at all.

Step 2

Build a proof of concept — in days, not months

I build a working prototype against your real data in your real environment. Not a sandboxed demo with curated inputs. If it's going to break, I want it to break now — when it's cheap to fix — not after three months of development.

Step 3

Productionize — make it reliable, not just clever

Error handling. Fallback paths for when the AI gets it wrong. Monitoring and alerting. Cost controls so your LLM bill doesn't surprise you. Human-in-the-loop review for high-stakes decisions. This is the phase where most AI projects die — and it's where 25+ years of engineering experience matters more than knowing the latest model release.
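To make the "fallback paths" point concrete, here is a minimal sketch of the retry-then-degrade pattern — `primary_model` and `rule_based_fallback` are hypothetical stand-ins for a real LLM client and a deterministic backup path, not part of any specific engagement:

```python
import time

def primary_model(prompt: str) -> str:
    """Stand-in for an LLM call that can fail transiently."""
    raise RuntimeError("model unavailable")  # simulate an outage

def rule_based_fallback(prompt: str) -> str:
    """Deterministic fallback so the request still gets an answer."""
    return f"[queued for human review] {prompt}"

def answer(prompt: str, attempts: int = 3, base_delay: float = 0.01) -> str:
    """Retry the AI path with exponential backoff, then degrade gracefully."""
    for attempt in range(attempts):
        try:
            return primary_model(prompt)
        except RuntimeError:
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying
    # AI path exhausted: fail soft instead of failing the whole request
    return rule_based_fallback(prompt)
```

The design choice worth noting: the caller never sees an exception. When the model is down, the system routes to a human queue instead of dropping the request — that is the difference between a demo and a production system.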

Step 4

Handoff — your team owns it

I document everything, train your team to maintain and extend the system, and set up runbooks for common issues. You own the source code. There's no vendor lock-in, no ongoing dependency on me. If you want me to stick around for ongoing optimization, great. If not, the system keeps running.

Questions companies ask about AI consulting

We already tried AI and it didn't work. Why would this be different?

Most AI implementations fail because they were built by someone who knows AI but not production engineering. A model that works in a notebook doesn't work in a pipeline that handles errors, retries, and edge cases. I build the entire system — not just the AI part. That's why it works.

How do you handle sensitive data?

I default to local/private models (Qwen, Llama, Mistral) for anything sensitive. When cloud APIs make sense, I set up data processing agreements and ensure PII is stripped before it hits any external service. For healthcare clients like ClaimHawk, I work within HIPAA requirements. Your data stays yours.
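A rough sketch of what "PII is stripped before it hits any external service" can look like — the regexes below are illustrative placeholders for a few common US formats, not a complete redaction scheme (a real deployment would use a vetted PII-detection library):

```python
import re

# Illustrative patterns only — real PII detection needs a dedicated library.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email address
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US phone
]

def strip_pii(text: str) -> str:
    """Redact obvious PII before the text leaves your infrastructure."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

The key point is where this runs: on your side of the boundary, before any cloud API sees the payload.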

What if AI isn't the right solution for our problem?

Then I'll tell you that on the discovery call and save you the money. I've been solving engineering problems for over two decades — I know when a scheduled script, a better database index, or a simple integration will solve your problem faster and cheaper than AI. I don't need to sell you AI to justify my existence.

How much will our LLM API costs be?

I design for cost from day one. That means using smaller/local models where quality allows, caching aggressively, batching requests, and routing only the hard problems to expensive models like GPT-4 or Claude Opus. Most production AI systems I build run on $100-500/month in API costs, not the $10k+ horror stories you hear about.
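The caching-and-routing idea above, sketched in a few lines — model names, prices, and the difficulty heuristic are all hypothetical, chosen only to show the shape of the pattern:

```python
from functools import lru_cache

# Hypothetical model names; in practice these map to real API clients.
CHEAP_MODEL = "local-qwen"
EXPENSIVE_MODEL = "gpt-4"

def pick_model(prompt: str) -> str:
    """Route only 'hard' prompts to the expensive model.

    The heuristic here (length, number of questions) is a placeholder —
    real routing might use a classifier or task type instead.
    """
    hard = len(prompt) > 2000 or prompt.count("?") > 3
    return EXPENSIVE_MODEL if hard else CHEAP_MODEL

@lru_cache(maxsize=4096)
def cached_answer(prompt: str) -> str:
    """Identical prompts hit the cache instead of costing another API call."""
    model = pick_model(prompt)
    return f"(answered by {model})"  # placeholder for the real API call
```

Two levers in one place: repeated prompts cost nothing after the first call, and routine prompts never touch the expensive model at all.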

Do you do AI training for non-technical teams?

Yes. I run workshops for both technical and non-technical teams. For engineers, it's hands-on: building AI pipelines, prompt engineering at scale, and integrating models into existing codebases. For business teams, it's practical: which tools to use, how to evaluate AI vendors, and how to spot when a contractor is overselling AI capabilities. Custom curriculum, always.

Tell me what's slow, expensive, or broken. I'll tell you whether AI can fix it — and if it can, how fast.

Book a discovery call