Stop Automating the Unknown: A Practical Playbook for Process Automation That Actually Delivers
How to avoid joining the 95% of businesses that fail to get ROI from their AI
Most automation programs don’t fail because of the technology. They fail because they try to automate messy, unclear processes and data, and then hope for a miracle. The result is a tangle of bots, rules, and half-finished pilots that don’t make it to production. In this article we lay out a clear, practical path to process automation, covering RPA (Robotic Process Automation), workflow orchestration, low-code solutions, and AI agents, along with the relevant data, governance, and impact considerations.
A few key findings from the State of AI in Business 2025 report:
- 95% still see no GenAI ROI. If it doesn’t fit your workflow, it won’t deliver returns (80%+ tried, 40% deployed, ~5% deliver).
- Stop chasing top-line sparkle. The fastest wins are in the back office, even though 50–70% of budgets go to marketing & sales.
- Want to actually deploy? Buy before you build: partnering first is roughly twice as likely to reach production.
- Shadow AI is already here. Most staff use personal LLMs while only 40% of firms have official arrangements; channel it, don’t fight it.
RPA (Robotic Process Automation)
RPA automates human-like actions in software (clicking, typing, copying from one window to another). It works well when tasks are high-volume, rule-based, and very stable (for example, posting invoices or matching purchase orders). It struggles when screens change, when the process updates, when data input is unexpected, or when a task needs judgement rather than strict rules.
Workflow orchestration
This is the “air traffic control” of processes. It links automation steps across systems (web, CRM, ERP, LLM). It handles approvals, service-level agreements, retries, and error handling. RPA automates a step; orchestration keeps the whole end-to-end journey on track.
Low-code solutions
Low-code platforms let teams build small apps and flows with minimal coding. They are great for speed and for filling gaps (simple forms, routing, notifications). But without governance, low-code can create many one-off apps, data copies, and shadow systems that are hard to maintain. It is flexible and can handle workflow tasks, but it still requires low-code development.
AI agents
Systems that use LLMs, memory, and tools (APIs, databases, emails, bots) to work towards a goal. Agents don’t just answer questions; they take actions (fetch data, write an email, update a record, or trigger a payment). To be useful in business, agents need guardrails, clear instructions, well-defined inputs/outputs, and a way to escalate to a human when unsure.
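The loop described above can be sketched in a few lines. This is a minimal, illustrative example, not a production agent: `call_llm` is a stub standing in for a real model call, and the tool names and confidence threshold are hypothetical.

```python
# Minimal agent step with guardrails and human escalation (illustrative sketch).

def call_llm(task: str, context: dict) -> dict:
    # Stub for a real LLM call; returns a proposed action with a confidence score.
    return {"action": "fetch_vendor",
            "args": {"vendor_id": context["vendor_id"]},
            "confidence": 0.93}

# Registered tools only: the agent cannot call anything outside this dict.
TOOLS = {
    "fetch_vendor": lambda vendor_id: {"vendor_id": vendor_id, "status": "active"},
}

CONFIDENCE_THRESHOLD = 0.8  # below this, escalate to a human

def run_agent(task: str, context: dict) -> dict:
    proposal = call_llm(task, context)
    # Guardrail 1: least privilege, only allow registered tools.
    if proposal["action"] not in TOOLS:
        return {"status": "escalated", "reason": "unknown tool"}
    # Guardrail 2: low-confidence decisions go to a human.
    if proposal["confidence"] < CONFIDENCE_THRESHOLD:
        return {"status": "escalated", "reason": "low confidence"}
    result = TOOLS[proposal["action"]](**proposal["args"])
    return {"status": "done", "result": result}

print(run_agent("check vendor", {"vendor_id": "V-001"}))
```

The important design choice is that the guardrails run outside the model: the allow-list and threshold are plain code the business can audit, regardless of what the LLM proposes.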
Choosing the first use case sets your momentum. Intellifold’s Use Case Benefit Assessment scores candidates against the common failure modes:
- Lack of transparency. People can’t see the real process. They rely on process maps and swimlanes from years ago, not how work really goes today. Hidden rework, loops, and workarounds will kill ROI.
- Messy or missing data. Poor master data, inconsistent fields, and untrusted logs make automation brittle. LLMs need clear, clean inputs to produce reliable outputs.
- Siloed ownership. Finance, procurement, and operations optimise locally. Without shared goals and a common view, automation becomes patchwork.
- Automation before process thinking. Teams jump to RPA or an AI agent without clarifying steps, inputs, and outputs, choosing the fancy toy over real business benefit.
- RPA fragility. RPA is great for deterministic steps. A small screen change or new popup can break a bot. Without testing and monitoring, maintenance costs spiral.
- LLM unpredictability. Language models can misread a task if instructions are unclear or data is incomplete. Clear decision paths and exception handling are key.
- Tool access and memory. Without secure tool access and well-designed memory for context, agents stall or repeat mistakes.
- Governance gaps. Shadow AI use is everywhere, and unclear rules for use and data handling create massive risks, especially in regulated industries.
- No proof of impact. Without a baseline and after-go-live KPIs to measure what works and what doesn’t, the business case remains guesswork.
The sweet spot is not picking one tool. It’s combining them.
- Data foundations. Start with clear definitions and data ownership, and establish a single source of truth (maintained in one place). Then assess where data quality improvements are required through input controls (application & SOP) and where clean-up actions are needed to ensure reliable use.
- Process mining. Instead of automating based on guesses, use Process Mining to understand the exact data volumes and process variations. It’s surprising how often an automation budget is spent on fancy scenarios over proven business impact. It also establishes your baseline so you can prove impact later.
- Workflow orchestration. Define the end-to-end flow with clear states, retries, and exception paths. These products are great when processes are understood. They form the technology backbone of your automation journey.
- RPA as the adapter. RPA has lost some of its appeal with AI agents claiming the spotlight. However, RPA is still powerful when the inputs and actions are repetitive or when APIs for connectivity are missing. In combination with workflow orchestration, it has its niche.
- Large Language Models. ChatGPT, Gemini, Claude, etc. There are many providers, and while there are some differences in how these models were trained, the decision to use an instant (mini), thinking (reasoning), or research model is much more important. It can really impact user experience and token costs.
- Memory architecture. Retrieval-Augmented Generation (RAG) dynamically retrieves new or contextual information per query, typically from a vector database. It works well for wide knowledge domains with changing information. Cache-Augmented Generation (CAG) preloads a fixed knowledge base into the model and generates responses using already cached data, improving speed and reducing resource usage. An emerging variant, Knowledge-Augmented Generation (KAG), integrates structured knowledge stores into the generation process and works well for structured data and domain-specific reasoning. Assess the volume and change rate of required knowledge, and make the relevant choices for short-term memory (conversation context, the current case) and long-term memory (facts, policies, etc.).
- Tool access. Give the agent specific tools with least-privilege access: read vendor record, post journal entry, create ticket, send email. Always create a separate account per agent to minimise risk and monitor use. An MCP (Model Context Protocol) server can help agents interact seamlessly with external tools, databases, and services, using standard JSON formats for instructions and communication. Consider specific vendor MCPs or broader general-purpose ones, including the governance requirements.
- Evaluation and guardrails. Create automated checks on outputs: required fields present, values within bounds, policy references included, confidence above threshold. Route low-confidence or unusual cases to a human.
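The evaluation-and-guardrails point above lends itself to a concrete sketch. The checks below (required fields, value bounds, confidence threshold) mirror the ones named in the text; the field names and limits are illustrative assumptions, not a standard.

```python
# Automated output checks that decide "auto-process" vs "route to a human".
# Field names, bounds, and thresholds are illustrative.

REQUIRED_FIELDS = {"invoice_id", "amount", "vendor"}
MAX_AMOUNT = 10_000.00   # policy bound for unattended processing
MIN_CONFIDENCE = 0.85    # below this, a human reviews

def validate_output(output: dict) -> str:
    """Return 'auto' to proceed automatically, or 'human' to route for review."""
    if REQUIRED_FIELDS - output.keys():
        return "human"   # incomplete extraction: required fields missing
    if not (0 < output["amount"] <= MAX_AMOUNT):
        return "human"   # value outside policy bounds
    if output.get("confidence", 0.0) < MIN_CONFIDENCE:
        return "human"   # model is unsure
    return "auto"

print(validate_output({"invoice_id": "INV-7", "amount": 420.0,
                       "vendor": "Acme", "confidence": 0.93}))  # auto
```

Because the checks are deterministic code rather than another model call, they give you a cheap, auditable safety net in front of any LLM output.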
![Process automation tools overview](https://cdn.prod.website-files.com/6861eb5cbc40991900a5ee95/68933dfa5777c050c0d4f27a_ProcessAutomationTools.png)
Automation and AI need strong, simple rules everyone understands.
- Data governance. Define what data is used, who can see it, how long you keep it, and how you fix errors. Apply a “data minimisation” policy, and embed controls across the lifecycle. Transparency and explainability remain key, with the right human oversight. See the AI Governance & Control framework for controls to consider.
- Privacy by design. Keep personal data out of public tools. Adhere to rules such as the APPs (Australian Privacy Principles) or GDPR (General Data Protection Regulation) and the principles behind the EU AI Act by removing, redacting, or anonymising sensitive data. Trust is the competitive edge.
- Model and agent governance. Decide which decisions an AI can make, and which must go to a human. Keep an audit trail with inputs, tools used, outputs, and approvals.
- Security. Use single sign-on (SSO), multi-factor authentication (MFA), network controls, and least-privilege roles for bots and agents.
- Operational risk. If you’re regulated, align with risk standards and third-party management expectations. With transparency, these can still prove to be good use cases.
- Shadow AI use. Staff already use personal AI tools. Offer company approved options with clear rules and controls. It’s better to channel demand than to pretend it isn’t happening.
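The audit-trail point in the governance list above (inputs, tools used, outputs, approvals) can be made concrete with a small record structure. This is a hypothetical sketch; in practice the log would live in durable, append-only storage, and the field names are assumptions.

```python
# Append-only audit record for each agent decision step (illustrative sketch).

import time

AUDIT_LOG = []  # stand-in for durable, append-only storage

def audit(step, inputs, tools_used, output, approved_by=None):
    """Record one decision step: inputs, tools used, output, and approval."""
    record = {
        "timestamp": time.time(),
        "step": step,
        "inputs": inputs,
        "tools_used": tools_used,
        "output": output,
        "approved_by": approved_by,  # None = no human approval (yet)
    }
    AUDIT_LOG.append(record)
    return record

rec = audit(step="post_journal_entry",
            inputs={"invoice": "INV-7", "amount": 420.0},
            tools_used=["read_vendor_record", "post_journal_entry"],
            output={"status": "posted"},
            approved_by="j.doe")
print(rec["step"], rec["approved_by"])
```

Logging every step like this is what later lets you answer audit questions and trace a wrong result back to the exact input or tool call that caused it.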
Before you dive head-first into building your future RAG system or AI agent, focus on:
- Clarity on what’s working vs what’s broken and why
- Departments that share common goals and talk to each other
- An ambitious but realistic roadmap with quick wins and long-term goals
Process mining can help here. It reconstructs the actual process execution from time-stamped event data in ERP, CRM, and workflow systems. You’ll see how long steps take, where rework happens, what data volumes are involved, and which paths break the rules. This lets you:
- Select the right use cases (high volume, clear rules).
- Fix the root causes (poor data, unclear paths) before automating.
- Quantify the benefits (cycle times, cost per transaction, users involved) and set clear targets.
- Monitor after go-live to prove the impact and catch drift.
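To make the event-data idea concrete, here is a toy event log (case, activity, timestamp) and a cycle-time calculation of the kind process mining tools automate at scale. The cases and activities are invented for illustration.

```python
# Cycle time per case from a time-stamped event log (toy example).

from datetime import datetime

events = [
    ("C1", "Create PO",    "2025-01-02 09:00"),
    ("C1", "Approve PO",   "2025-01-03 11:00"),
    ("C1", "Post Invoice", "2025-01-05 16:00"),
    ("C2", "Create PO",    "2025-01-02 10:00"),
    ("C2", "Post Invoice", "2025-01-03 10:00"),  # skipped approval: a process variant
]

def cycle_times(events):
    """Hours from first to last event, per case."""
    fmt = "%Y-%m-%d %H:%M"
    spans = {}
    for case, _, ts in events:
        t = datetime.strptime(ts, fmt)
        lo, hi = spans.get(case, (t, t))
        spans[case] = (min(lo, t), max(hi, t))
    return {c: (hi - lo).total_seconds() / 3600 for c, (lo, hi) in spans.items()}

print(cycle_times(events))  # {'C1': 79.0, 'C2': 24.0}
```

Even this tiny log surfaces a variant (C2 skipped approval) and a cycle-time spread, exactly the kind of signal you use to pick and baseline automation candidates.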
- Start in the back office. Prioritise standardised, high-volume work over fancy implementations: accounts payable, order management, vendor onboarding, claims triage, payroll queries. These areas offer clear rules, better data, and faster payback.
- Understand your business processes: this makes knowing where to start so much easier. It also makes training the agent much simpler: you know what activities to perform, what input to consider, and what action to take. AI doesn’t fix broken processes; fix them first.
- Build the business case: AI isn’t always the answer. There are alternatives, and sometimes it isn’t even automation clients need, just some changes in how they work, a setting change in the system, and some training to help people understand. Where there is a business case, and I mean ROI for the automation investment, it should be very clear what will be achieved before building. Track KPIs before and after AI implementation.
- Use reliable data: Rubbish in, garbage out. Most people get this, but any unclear input can really throw off the performance of the AI. It’s incredible what consequences a little bit of bad data can have. Maintain clean data and establish data governance.
- Agents are there to automate: To take action they need integrations with the systems where data is captured and processed. You can give an agent multiple actions to take, but keep the set limited and make sure the decision paths are clear. The agent can then handle determining what to do and when to ask for additional input.
- Nail the basics. Define the steps, inputs, and outputs. Write sample prompts and examples for AI. Remove avoidable variation (for example, standardise reason codes, templates, and forms).
- Design for exceptions. Decide early what happens with missing fields, conflicting data, or edge cases. Don’t let “one weird case” stall the flow.
- Test like you mean it. For RPA, use versioned selectors and regression testing. For agents, run evaluation sets and track accuracy before you go live.
- Measure relentlessly. Track Key Performance Indicators (KPIs) before and after: cycle time, cost per item, % automation, first-time-right, exception rate, compliance rate. Share results in the open.
- Data Governance & Explainability: Establish what data is considered, which decisions the AI can take and which it cannot, and what level of human oversight is needed. Very importantly, record the output of individual steps. This is essential for finding the root cause of wrong results, and it helps provide comfort and answer audit questions.
- Start Small, Scale Fast: Everyone wants the holy grail of fully automated, autonomous processes. That is not happening any time soon. Understand where the impact will be significant and the steps to automate are clear. Start with just one agent. Develop, test, deploy, and make sure it’s great before scaling and moving on to the next agent. Scaling is hard, and adding agents increases complexity and the likelihood of unwanted outcomes.
- Continuous tuning: Development is an iterative process, and even after deployment the model needs to learn and be adjusted to correctly handle different scenarios. Agents do need maintenance. Make sure all scenarios are covered and anything outside the validated scenarios is directed back to humans.
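Several lessons above come back to measuring KPIs before and after go-live. A minimal sketch of that comparison, assuming simple per-transaction records with illustrative field names:

```python
# Before/after KPI comparison from transaction records (field names are illustrative).

def kpis(transactions):
    """Aggregate automation rate, first-time-right rate, and average cycle time."""
    n = len(transactions)
    return {
        "automation_rate":  sum(t["automated"] for t in transactions) / n,
        "first_time_right": sum(t["first_time_right"] for t in transactions) / n,
        "avg_cycle_hours":  sum(t["cycle_hours"] for t in transactions) / n,
    }

# Baseline, measured before go-live...
baseline = [
    {"automated": False, "first_time_right": True,  "cycle_hours": 48},
    {"automated": False, "first_time_right": False, "cycle_hours": 72},
]
# ...and the same measures after the agent went live.
after = [
    {"automated": True,  "first_time_right": True,  "cycle_hours": 6},
    {"automated": True,  "first_time_right": True,  "cycle_hours": 10},
    {"automated": False, "first_time_right": True,  "cycle_hours": 30},
]

print(kpis(baseline))
print(kpis(after))
```

The point is not the arithmetic but the discipline: capture the baseline before building, keep the definitions identical after go-live, and the business case stops being guesswork.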
✔ Use data, not assumptions, to validate automation opportunities.
✔ High-volume, rule-based tasks are usually the best starting point.
✔ Clearly define each step, input, and expected output before introducing AI.
✔ Use Process Mining to fix process issues and quantify before automating.
✔ Clearly define expected benefits, cost savings, and efficiency gains.
✔ Track KPIs before & after AI implementation to see success in action.
✔ Start with one AI agent. Test, optimise, and expand gradually.
✔ Understand that scaling AI increases complexity.
✔ Provide clear, structured instructions with well-defined inputs and outputs.
✔ Reduce complexity: simple, defined tasks and examples improve AI accuracy.
✔ Establish data governance to maintain clean and structured inputs.
✔ Monitor for data inconsistencies that could break AI logic.
✔ Connect AI to ERP, CRM, workflow systems and other tools for real automation.
✔ Define clear decision paths to follow and how tools are used under the scenario.
✔ Focus on clear instructions, examples, model refinement, and high-quality data.
✔ Establish what decisions AI can make, and what requires human oversight.
✔ Fix poorly designed workflows before blaming the AI.
✔ Regularly evaluate and, where needed, retrain models to improve accuracy.
✔ Log output from every AI decision step to ensure auditability & compliance.
A few key lessons are captured in our AI Process Automation - 10 Critical Lessons Before Getting Started.
If you remember one thing, make it this: don’t automate the unknown. Build transparency and a clear business case first, then automate with a plan, clean data, and simple rules. Orchestrate the end-to-end flow, use bots where it makes sense, and give AI agents the tools, memory, and guardrails they need. Govern the whole thing with logging, data privacy in mind, and human oversight. And measure from day one so you can prove the impact.
At Intellifold Process Mining & AI, we help organisations with their automation journey. From the initial insight and business case through Process Mining, to the roadmap and considerations for success. From there, we design and deliver the right mix of RPA, orchestration, low-code, and AI agents. This is not about technology. What matters is results. Fewer surprises, faster payback, and real movement on your P&L.
If you’re ready to apply automation, stop guessing and start with visibility. The rest of the journey becomes much easier. Book a call to discuss your goals.