The Rise of AI Agents: How to Build Intelligent Agents with Ruby on Rails

You've probably heard of ChatGPT and chatbots, but are you familiar with AI agents? These intelligent systems represent the next step in artificial intelligence: they can act autonomously to accomplish complex tasks. In this article, we'll explore what AI agents are, how to build them with Ruby on Rails, and best practices for developing them ethically and effectively.

What is an AI Agent?

Definition and Key Concepts

Agentic AI refers to the idea that artificial intelligence possesses a form of autonomy — it can act without being constantly prompted, take actions to achieve a goal, and behave in a more human-like way.


AI agents are the software systems that realize this vision. Unlike simple chatbots, they can:

  1. Act autonomously
  2. Make contextual decisions
  3. Understand and remember context
  4. Chain tasks without explicit programming


A typical agent comprises four essential components (a code sketch follows the list):

  1. An objective or a clear mission
  2. Tools for interacting with the environment
  3. A planner or orchestrator to coordinate actions
  4. Short- and long-term memory to maintain context
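
As a mental model, here is a minimal Ruby sketch of how these four components can fit together. The Agent class and its collaborators (planner, memory, tool objects) are illustrative names for this article, not a library API.

class Agent
  def initialize(objective:, tools:, planner:, memory:)
    @objective = objective # the clear mission the agent pursues
    @tools     = tools     # named interfaces to the outside world (APIs, MCP servers, ...)
    @planner   = planner   # decides the next action from the objective and the memory
    @memory    = memory    # short- and long-term context
  end

  def step
    action = @planner.next_action(@objective, @memory.recall)
    result = @tools.fetch(action[:tool]).call(action[:params])
    @memory.remember(action, result)
    result
  end
end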

Agents vs Workflows: What's the Difference?

Do not confuse agents with workflows, even though the two concepts are related. A workflow follows predefined and rigid steps, while an agent adapts and learns. Imagine a workflow as a strict cooking recipe, and an agent as an experienced chef who can improvise depending on available ingredients!



Aspect                 | Workflow                | Agent
-----------------------|-------------------------|---------------------------------
Flexibility            | Limited, based on rules | High, based on models
Learning               | Minimal                 | Capable of learning and adapting
Handling contingencies | Weak                    | Excellent


Types of AI Agents


There are several categories of agents, each suited to specific use cases (a short sketch contrasting two of them follows the list):


  1. Learning agents: They improve with experience, learning from each interaction and adapting their behavior.
  2. Utility-based agents: They optimize their actions according to a defined objective, using trade-offs to maximize overall performance.
  3. Model-based agents: They maintain an internal representation of the world to make decisions beyond immediate data.
  4. Goal-oriented agents: They work toward a defined goal and plan their actions accordingly.
  5. Simple reflex agents: They respond directly to stimuli without memory or complex planning.
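
To make the extremes of this list concrete, here is a rough sketch contrasting a simple reflex agent (fixed stimulus-response rules, no memory) with a learning agent that adapts from feedback. Both classes are purely illustrative.

# Illustrative only: reacts to the current message with fixed rules, no memory.
class SimpleReflexAgent
  RULES = {
    /refund/i   => :open_refund_ticket,
    /password/i => :send_reset_link
  }.freeze

  def act(message)
    RULES.each { |pattern, action| return action if message.match?(pattern) }
    :escalate_to_human
  end
end

# Illustrative only: keeps feedback statistics and adapts its choices over time.
class LearningAgent
  def initialize
    @successes = Hash.new(0)
  end

  def act(_message)
    # Naive strategy: prefer whichever action has worked best so far.
    @successes.max_by { |_action, count| count }&.first || :escalate_to_human
  end

  def feedback(action, success)
    @successes[action] += 1 if success
  end
end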

Concrete Applications of AI Agents

AI agents find their place in numerous domains:

  1. Automated customer support: autonomous ticket management and problem resolution
  2. Security threat detection: monitoring and incident response
  3. Travel assistants: personalized planning and bookings
  4. Programming agents: code generation and automatic creation of pull requests


At GitHub, the Copilot team is developing agents capable of automatically creating PRs from issues, revolutionizing how we approach software development.

Building an Agent with Ruby on Rails

Architecture and Core Components

To build a robust agent in Rails, we need to implement several fundamental layers:


1. The Tools Layer with MCP


Model Context Protocol (MCP) is a standard developed by Anthropic that defines how AI models discover and call external tools in a secure way. It's like giving your agent a universal toolbox!


To fully understand MCPs and implement an MCP server with Rails, read the article How to design AI-Ready applications with Rails.



def setup_zendesk_integration
  mcp_server = MCPServer.new(service: 'zendesk')
  mcp_server.authenticate(api_key: ENV['ZENDESK_API_KEY'])
  @agent.add_tool(mcp_server)
end
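
Once registered, the planner can route calls through that server. The snippet below is a hypothetical usage of the article's wrapper objects, not the official MCP SDK, and the 'add_comment' tool name is an assumption for illustration.

# Hypothetical usage of the Zendesk tool registered above.
def reply_to_ticket(ticket_id, answer)
  @agent.call_tool(
    'zendesk',                             # MCP server to route the call to
    'add_comment',                         # tool name assumed to be exposed by that server
    { ticket_id: ticket_id, body: answer }
  )
end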


2. Memory Management


Long-term memory: Stored in a simple database table with role, content, and run_id. After each planner response, tool call, or user input, a row is added to agent_memories.


class AgentMemory < ApplicationRecord
  belongs_to :agent_run

  validates :role, inclusion: { in: %w[user assistant system tool] }
  validates :content, presence: true

  scope :recent, -> { order(created_at: :desc).limit(50) }
end
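
In practice, the orchestrator appends a row after every exchange and rebuilds the conversation window from the most recent entries. A minimal sketch built on the model above:

# Sketch: persist each exchange, then rebuild context for the next LLM call.
def record_and_recall(run, role, content)
  AgentMemory.create!(agent_run: run, role: role, content: content)

  # Reverse so the prompt reads oldest to newest.
  AgentMemory.where(agent_run: run).recent.to_a.reverse.map do |row|
    { role: row.role, content: row.content }
  end
end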


Short-term memory (State Store): A state store that serves as the single source of truth for the agent's current execution.


class AgentRun < ApplicationRecord
  # JSONB column holding the working state
  store_accessor :state_data, :current_plan, :tool_outputs, :retry_count

  def set_pointer(key, value)
    self.state_data ||= {}
    self.state_data["pointer_#{key}"] = value
    save!
  end
end
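
During a run, the orchestrator reads and writes working state through the store_accessor attributes or the pointer helper. Illustrative usage of the model above:

# Sketch: using the state store during a run.
run = AgentRun.create!(state_data: {})

run.current_plan = { 'tool' => 'zendesk', 'action' => 'add_comment' }
run.retry_count  = 0
run.save!

# Remember how far we got in a long-running piece of work.
run.set_pointer('last_processed_ticket', 123)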


3. Agent Orchestration


Orchestration is the brain of your agent—it plans, executes, and supervises multi-step work:


class AgentOrchestrator
  def process_next_step(run)
    # Ask the planner for the next action
    plan = request_plan_from_llm(run)

    # Store the raw plan
    run.workflow_steps.create!(
      step_type: 'plan',
      content: plan
    )

    # Policy checks and guardrails
    return false unless validate_plan(plan)

    # Execute the action via MCP
    result = execute_tool_action(plan['tool'], plan['parameters'])

    # Merge the results into the context
    run.merge_tool_result(result)

    plan['action'] == 'finished'
  end
end
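
process_next_step handles a single step; a thin loop drives the run to completion. Here is a sketch assuming the classes above plus a status column on AgentRun, with a hard cap as a guardrail against runaway agents:

# Sketch: drive a run step by step until the planner declares it finished.
class AgentRunner
  MAX_STEPS = 20 # guardrail against infinite planning loops

  def initialize(orchestrator)
    @orchestrator = orchestrator
  end

  def call(run)
    MAX_STEPS.times do
      return run.update!(status: 'completed') if @orchestrator.process_next_step(run)
    end
    run.update!(status: 'aborted') # hand off to a human for review
  end
end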


4. Intelligent Planning


Planning turns an objective into a sequence of verifiable steps. We use four main tactics:


Sub-goal decomposition: split a large objective into smaller steps

Reflection: analyze results and decide on improvements

Self-critique: evaluate results against defined criteria

Chain of thought: internal reasoning about the next actions


def plan_next_action(context)
  prompt = build_planning_prompt(context)

  response = llm_client.complete(
    messages: prompt,
    response_format: { type: "json_schema", schema: action_schema }
  )

  JSON.parse(response.choices.first.message.content)
end
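
The JSON schema is what turns the planner's output into a machine-checkable contract. The exact fields are up to you; here is one possible shape for action_schema, with field names that are assumptions for illustration:

# Hypothetical schema: every plan must declare an action, and optionally a tool and parameters.
def action_schema
  {
    name: 'agent_action',
    schema: {
      type: 'object',
      properties: {
        action:     { type: 'string', enum: %w[use_tool finished] },
        tool:       { type: 'string' },
        parameters: { type: 'object' },
        reasoning:  { type: 'string' } # short summary of the reasoning, kept for traceability
      },
      required: %w[action]
    }
  }
end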


Best Practices for Building Agents

Modular and Maintainable Architecture

Agents are complex, tightly coupled systems. A modular architecture is crucial as the ecosystem evolves quickly — tooling strategies change every few months!

Safeguards and Policies

Implement essential checks:

  1. Access policy controls
  2. Rate limits
  3. Validation of user input
  4. Human-in-the-loop confirmation points (see the sketch after the example below)

class PolicyGate
  def validate_action(user, action, parameters)
    return false unless user.can_perform?(action)
    return false if rate_limit_exceeded?(user)
    return false unless valid_parameters?(parameters)

    true
  end
end
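
The fourth safeguard, human-in-the-loop confirmation, can be as simple as pausing the run before any irreversible action. A hedged sketch, assuming a status column and a pending_action field on the run:

# Sketch: pause sensitive actions until a human approves them.
class ConfirmationGate
  PendingConfirmation = Class.new(StandardError)

  SENSITIVE_ACTIONS = %w[delete_ticket issue_refund merge_pull_request].freeze

  def check!(run, action)
    return unless SENSITIVE_ACTIONS.include?(action)

    run.update!(status: 'awaiting_confirmation', pending_action: action)
    raise PendingConfirmation, action # the runner rescues this and notifies a human
  end
end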

Minimal Prompt Changes

A small, seemingly innocuous change can completely break your agent. The golden rule: a single sentence instead of a paragraph. LLMs are sensitive, and overly prescriptive guidance can cause them to hallucinate new actions or break JSON contracts.

Even if this becomes less true over time, avoid frequent changes and, above all, version them.
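
Versioning prompts can be as lightweight as storing them as named, immutable records, so you can diff them, roll back, and correlate behaviour changes with prompt changes. A sketch, not a prescribed schema (the run variable is assumed to come from the orchestration code above):

# Sketch: prompts stored as immutable, versioned records.
class PromptVersion < ApplicationRecord
  # columns: name:string, version:integer, body:text
  validates :name, :version, :body, presence: true
  validates :version, uniqueness: { scope: :name }

  def self.current(name)
    where(name: name).order(version: :desc).first
  end
end

# Each run records the prompt version it used, so regressions stay traceable.
prompt = PromptVersion.current('planner_system_prompt')
run.update!(prompt_version: prompt.version)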

Extensive Testing and Observability

Set up automated nightly evaluations and test harnesses to lock in behaviors. Everything the agent does must be traced, inspected, and explainable — it’s essential for accountability and trust.
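
Nightly evaluations can be a plain harness that replays a fixed set of scenarios and fails loudly when the success rate drops. A sketch assuming the AgentRunner above and a hypothetical scenarios fixture:

# Sketch: lib/tasks/agent_eval.rake — replay known scenarios and record the outcome.
namespace :agent do
  desc 'Replay evaluation scenarios and fail if the success rate drops'
  task evaluate: :environment do
    scenarios = YAML.load_file(Rails.root.join('test/fixtures/agent_scenarios.yml'))

    results = scenarios.map do |scenario|
      run = AgentRun.create!(objective: scenario['objective'], state_data: {})
      AgentRunner.new(AgentOrchestrator.new).call(run)
      run.reload.status == 'completed'
    end

    success_rate = results.count(true).fdiv(results.size)
    abort "Agent eval below threshold: #{(success_rate * 100).round}%" if success_rate < 0.9
    puts "Agent eval passed: #{(success_rate * 100).round}% of #{results.size} scenarios"
  end
end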

The Future of AI Agents

More Human Agents


Anthropomorphism (yes, it’s a real word!) will intensify. Agents will develop distinct personalities, build relationships with users, and align with brand values.


Sub-agent Hierarchies


The future will feature architectures where each sub-agent has its own responsibility, following the Single Responsibility Principle (SRP) we know well. No more monolithic agents!


Native Workflows


Rather than cobbling together workflows around an LLM, future systems will model workflows natively with branching, retries, checkpoints, and rollbacks.


Ethical Considerations


Transparency and Accountability


Users must understand what the agent does and be able to influence it. Observability is not just a technical issue — it’s an ethical imperative.


Bias and Fairness


Question the LLMs you use. Are they biased? Dogfooding (using your own tools) is essential to detect these issues.


Security and Retention Policies


Put in place clear security and data retention policies. Your agents handle sensitive information!


Augmentation vs Replacement


Let’s focus on augmenting human capabilities rather than replacing developers. Agents should allow us to focus on what really matters — like spending more time with our loved ones!


Conclusion


AI agents represent a fundamental evolution in building software systems. We are moving from traditional programming paradigms to systems that reason, learn, and adapt. With Ruby on Rails, we have all the tools needed to build robust and ethical agents.


If you want to learn how to make your applications AI-Native, check out my article on the MCP protocol and how to implement it in Rails.



You can find the original video on YouTube:



