Playbook
Indianapolis enterprise AI: the playbook for getting your first internal pilot live in under 30 days
April 29, 2026
Indianapolis has more Fortune 500 and Fortune 1000 headquarters per capita than most US metro areas, and it rarely gets credit for it. Eli Lilly, Cummins, Salesforce's regional hub, Corteva, and a cluster of mid-market manufacturers in the surrounding counties run significant engineering operations here. Most of them are two to three years into AI conversations and six to twelve months into actual budget commitments. A smaller number have a live internal tool.
The gap is not budget. The gap is not technical talent. The gap is almost always the same thing: nobody has taken a specific, small workflow, stripped it down to its data inputs and its decision output, and shipped the smallest version that produces a useful answer.
This is the playbook I use when a company is ready to move from "AI strategy" to "AI tool running in production."
Why 30 days is the right target
Internal AI pilots on longer timelines tend to die from scope expansion. The initial request is "build us an AI tool for procurement analysis." Six weeks in, it is "build us an AI tool for procurement analysis that integrates with SAP, handles all spend categories, and produces a board-level summary." The original problem that justified the project gets buried under requirements that nobody validated.
Thirty days forces a choice. You can ship something real in 30 days if you pick one workflow, one data source, and one decision the tool supports. Or you can plan a larger project and ship nothing. The 30-day constraint is not arbitrary; it is a forcing function that keeps the first pilot small enough to succeed and clear enough to evaluate.
The Indianapolis market has a specific version of this problem. Midwest manufacturers and companies in regulated industries are not early adopters by culture. They have seen AI vendors promise transformation and deliver slides. The 30-day pilot is the credibility play: something real, something you can measure, something you can point to when the next vendor walks in.
Stage 1: pick the workflow
The right workflow for a first AI pilot has three properties.
First: it is currently done by a person reading a document or a screen and producing a written output. Variance commentary, supplier evaluation summaries, ticket classification, spec doc review, incident report drafts. If the output is a number produced by a formula, AI is not the tool. If the output is a sentence or a paragraph produced by a person reading source material, AI can help.
Second: it happens often enough to measure. If the workflow runs twice a year, you cannot evaluate the pilot in 30 days. The minimum useful frequency is daily or weekly.
Third, and the one most people underweight: the person doing it today will tell you honestly that it is tedious. Not "inefficient" or "not a good use of my time." Those are things people say in front of their manager. The honest answer is whether the person dreads it. Tedious, repetitive text synthesis is where AI produces the most immediate and measurable time savings.
Stage 2: pick the data
The fastest way to kill an AI pilot before it starts is to try to connect it to a live enterprise system in the first sprint. SAP integration, Salesforce data pulls, ServiceNow API access: each of these is a project by itself. The 30-day pilot does not touch live systems.
Instead, identify a representative data export. Most enterprise systems can produce a CSV or a PDF export of the data that feeds the workflow you identified in Stage 1. Pull two weeks of historical data. That is your pilot dataset.
Three questions to answer about the data before writing any code:
Does it contain PII that should not leave your network? If yes, the pilot either keeps inference under your control (Ollama on a local server, or Azure OpenAI inside your tenant with a data-handling agreement in place) or you anonymize the export before it goes anywhere; the sketch after these three questions shows one way to do that. This is a 30-minute decision, not a 6-month compliance review.
Is it structured or unstructured? Structured data (spreadsheets, CSVs with consistent columns) is easier to work with but less common as the input to text-synthesis workflows. Unstructured data (PDFs, Word documents, email threads) is more common and requires a parsing step before inference, also sketched below. Both are solvable; unstructured just takes one more day.
How much of it is there? For a 30-day pilot, 100 to 1,000 representative records is plenty. More is not better. More means more time cleaning and less time shipping.
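Here is a minimal sketch of how those three answers turn into a pilot dataset. It assumes the pandas and pypdf packages; the file names, the requester_email column, and the 500-record cap are placeholders for illustration, not prescriptions.

```python
import pandas as pd
from pypdf import PdfReader  # only needed for the unstructured (PDF) path

# --- Structured path: a CSV export from the source system ---
df = pd.read_csv("procurement_export.csv")

# Question 1: drop or mask PII columns before the data goes anywhere.
# 'requester_email' is a placeholder; substitute your own sensitive columns.
df = df.drop(columns=["requester_email"], errors="ignore")

# Question 3: cap the pilot dataset. 100 to 1,000 representative records is plenty.
if len(df) > 500:
    df = df.sample(n=500, random_state=42)  # fixed seed keeps reviews repeatable

df.to_csv("pilot_dataset.csv", index=False)

# --- Unstructured path: question 2's extra parsing day, in miniature ---
# Each PDF becomes one text record the model can read in Stage 4.
reader = PdfReader("supplier_evaluation.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)
```

The point is the shape of the work: one short script, run once, that produces a dataset you can hand to the model in Stage 4.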
Stage 3: pick the model tier
The decision that determines the most about cost and data-handling requirements is which AI model the pilot uses. Three tiers matter for enterprise.
Tier 1: API-based cloud inference. Claude, GPT-4o, Gemini. Best output quality. Data leaves your network. Appropriate for non-sensitive data under a standard data processing agreement.
Tier 2: Cloud-hosted with data isolation. Azure OpenAI, Amazon Bedrock, Google Cloud Vertex AI. Model quality comparable to Tier 1. Data stays within your cloud tenant. Appropriate for moderately sensitive data. Adds 1 to 2 weeks of setup versus Tier 1.
Tier 3: On-premises inference. Ollama running Llama 3, Mistral, or Phi-3 on local hardware. Data never leaves the building. Model quality is lower than Tiers 1 and 2 for complex reasoning tasks, but sufficient for most classification and short-form synthesis tasks. Appropriate for highly sensitive data where no cloud option is approved.
For most Indianapolis mid-market pilots, the answer is Tier 1 with standard data handling. If the data is medical records or defense-contract related, Tier 2 or 3. The choice takes 30 minutes with the right stakeholders in the room.
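One practical note that makes the tier decision less scary: all three tiers can be reached through the same OpenAI-compatible client interface, so moving a pilot between tiers is mostly a configuration change, not a rewrite. A minimal sketch, assuming the openai Python package and a local Ollama install; the endpoint URL, API version, and model names are illustrative.

```python
from openai import OpenAI, AzureOpenAI

# Tier 1: API-based cloud inference. (OpenAI shown; Claude and Gemini
# have analogous SDKs.) Data leaves your network under a standard DPA.
tier1 = OpenAI(api_key="sk-...")

# Tier 2: cloud-hosted with data isolation. Azure OpenAI runs inside
# your tenant; the resource name and API version are placeholders.
tier2 = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="...",
    api_version="2024-06-01",
)

# Tier 3: on-premises inference. Ollama exposes an OpenAI-compatible
# endpoint, so the same client code points at local hardware instead.
tier3 = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = tier3.chat.completions.create(
    model="llama3",  # any model pulled locally with `ollama pull`
    messages=[{"role": "user", "content": "Classify this ticket: ..."}],
)
print(resp.choices[0].message.content)
```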
Stage 4: ship the smallest version
The smallest version of an AI pilot that produces useful output is usually not a product. It is a script that reads an export file, calls the model, and writes a structured output to a spreadsheet.
That is on purpose. A script is auditable. The people doing the workflow today can look at the inputs and the outputs side by side and tell you whether the AI output is useful, wrong, or missing something the human always knew to check. That feedback loop is the most valuable thing the first sprint produces.
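For concreteness, here is roughly what such a script can look like. It is a sketch, not a template: the prompt, the model choice, and the column names assume a variance-commentary-style workflow, and it expects the pilot dataset produced in Stage 2 plus the openai, pandas, and openpyxl packages.

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()  # Tier 1 shown; swap base_url per the tier sketch above

PROMPT = (
    "You draft variance commentary for a finance reviewer. Given the "
    "record below, write a two-sentence summary of the variance and "
    "flag anything a human should double-check.\n\nRecord:\n{record}"
)

df = pd.read_csv("pilot_dataset.csv")

drafts = []
for _, row in df.iterrows():
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(record=row.to_dict())}],
    )
    drafts.append(resp.choices[0].message.content)

# Inputs and outputs side by side, so the workflow owner can mark each
# row useful, wrong, or missing a check the human always knew to make.
df["ai_draft"] = drafts
df["reviewer_verdict"] = ""  # filled in by hand during the Day 8-14 review
df.to_excel("pilot_output.xlsx", index=False)
```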
The product version comes in sprint two, after you have validated that the AI output is accurate enough on the real data to be worth a UI around it.
Here is the timeline I use on a typical 30-day engagement:
- Days 1-3: workflow and data selection
- Days 4-7: data export and parsing
- Days 8-14: model integration and first-pass output review with the workflow owner
- Days 15-22: prompt refinement and edge case handling
- Days 23-28: output quality review and delivery
- Days 29-30: handoff documentation and next-sprint recommendation
Why Indianapolis specifically
The talent pool here is different from San Francisco or New York in ways that matter for enterprise AI. Midwestern engineers tend to have deep operational knowledge of the industries they work in. The Cummins engineer who knows the procurement workflow is not a specialized AI consultant. The TIEM IT manager who understands the warranty tracking system is not a product manager. But they understand the data and the decision context at a level that a parachuted-in consultant from the coasts often does not.
The best AI pilots I have seen work because the technical person and the domain expert are in the same building, often on the same team. Indianapolis has that pairing in manufacturing, logistics, pharma, and financial services. The AI capability gap is not as large as these companies' own self-assessments suggest. The harder gap is organizational: getting the right two people to agree on a 30-day scope.
Work with me
If your team is ready to run the 30-day playbook on a specific workflow, I can cover Stage 1 and Stage 2 in a single 60-minute working session. Book a 30-minute introductory call and bring the name of the workflow you have in mind; by the end of that first working session, we will have a scope and a data plan.