AI/ML Archives - Tech Reformers

Category Archive: AI/ML

Civic AI Agent

Local governments sit on a mountain of public information: municipal codes, ordinances, permitting requirements, and service guides. Yet residents can rarely find the information they need. They call city hall to ask questions that are already answered in public documents, and staff spend time answering the same questions over and over. The information exists; it just isn’t accessible.

Civic AI Agent is our answer. It’s a conversational assistant that lets residents ask questions and get accurate, cited answers drawn directly from official city documents with no hallucinations, no guessing, no dead-end search results.

We recently built a proof of concept for the City of Ruston. The architecture is designed to be replicated for any city, school district, county agency, or public organization.

Try Civic AI Agent at https://ruston-ai.techreformers.com/

What Is Civic AI Agent?

Civic AI Agent is a web-based chat interface built on AWS serverless architecture. Residents type a question like “What are the noise ordinance rules for construction?” or “How do I apply for a home occupation permit?” and receive a grounded answer with citations linking back to the exact section of the municipal code or ordinance that supports it.

The system is not a general-purpose chatbot. It only answers from documents you have explicitly indexed. That means every answer is traceable, auditable, and grounded in official city content.

Who Is It For?

  • Cities and municipalities with municipal code, ordinances, city services, permitting
  • School districts with policies, handbooks, enrollment requirements, board decisions
  • County agencies with zoning regulations, health codes, public records
  • Any organization with a large body of public-facing documents that residents or stakeholders need to navigate

Why an Agent, Not Just a Chatbot?

This is the most important architectural decision we made, and it shapes everything about where the product can go.

A chatbot answers questions. If you ask “How do I get a pet license?” it tells you the steps. That’s useful, but it’s still just a search engine with intelligent phrasing.

An AI agent can act. It has access to tools such as APIs, forms, databases, and external systems, and it can execute multi-step workflows on your behalf. The difference looks like this:

  • Chatbot: “Here are the steps to apply for a pet license.” Agent: “I’ve started your pet license application. Here’s what I need from you to complete it.”
  • Chatbot: “Your utility bill is due on the 15th.” Agent: “I’ve scheduled your utility payment for the 14th. Here’s your confirmation.”
  • Chatbot: “The park permit form is at this link.” Agent: “I’ve submitted your park reservation request for Saturday. You’ll receive a confirmation by email.”

Civic AI Agent is built on Amazon Bedrock Agents, which provides the orchestration layer that makes this possible. In its current form, the agent answers questions grounded in city documents. But because it is a true agent, not a chatbot, it can be extended with action groups that connect to permitting systems, payment processors, scheduling platforms, or any REST API the city exposes.

The foundation is already in place. Adding new capabilities is a matter of connecting new tools, not rebuilding the system.
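To make the “action groups” idea concrete, here is a minimal sketch of what one such tool could look like. The event and response shapes follow the Bedrock Agents action-group Lambda contract as we understand it; the `PetLicenseActions` group and its `/pet-license/start` API are hypothetical, invented purely for illustration.

```typescript
// Hypothetical action-group handler: "start a pet license application".
// The agent calls this Lambda when it decides the resident's request
// maps to this action. All API paths and field names here are illustrative.

interface AgentParameter { name: string; type: string; value: string; }

interface AgentActionEvent {
  messageVersion: string;
  actionGroup: string;
  apiPath: string;
  httpMethod: string;
  parameters?: AgentParameter[];
}

interface AgentActionResponse {
  messageVersion: string;
  response: {
    actionGroup: string;
    apiPath: string;
    httpMethod: string;
    httpStatusCode: number;
    responseBody: { "application/json": { body: string } };
  };
}

function handler(event: AgentActionEvent): AgentActionResponse {
  const params = new Map(
    (event.parameters ?? []).map(p => [p.name, p.value] as [string, string]),
  );

  let status = 200;
  let body: unknown;
  if (event.apiPath === "/pet-license/start" && event.httpMethod === "POST") {
    // A real deployment would call the city's permitting system here.
    body = {
      applicationId: `PL-${params.get("petName") ?? "unknown"}`,
      nextSteps: ["proof of rabies vaccination", "owner contact details"],
    };
  } else {
    status = 404;
    body = { error: `Unknown action ${event.httpMethod} ${event.apiPath}` };
  }

  return {
    messageVersion: "1.0",
    response: {
      actionGroup: event.actionGroup,
      apiPath: event.apiPath,
      httpMethod: event.httpMethod,
      httpStatusCode: status,
      responseBody: { "application/json": { body: JSON.stringify(body) } },
    },
  };
}
```

Registering a new action group like this, with an OpenAPI schema describing the endpoint, is all it takes for the agent to start invoking it; no changes to the chat interface or orchestration layer are needed.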

In Practice: The City of Ruston

The City of Ruston proof of concept indexes the full municipal code and key pages from the city website. Residents ask questions and receive cited answers linked back to the source document. The City Clerk’s office manages the knowledge base, monitors resident interactions, and identifies documentation gaps through the admin portal — no engineering involvement required for day-to-day operations.

The proof of concept is the first deployment of the architecture. As it matures, we will share what we learn about resident usage patterns, the kinds of questions that surface documentation gaps, and how agent-based civic tools perform in a real municipal context.

Key Capabilities

Natural Language Q&A with Source Citations

Residents ask questions the way they would ask a person. The agent retrieves the most relevant document chunks, synthesizes an answer, and cites the source including a direct link to the original document.

Sources cited

Retrieval-Augmented Generation (RAG)

Civic AI Agent uses the RAG pattern, the industry standard for grounding AI responses in your own resources. Rather than relying on a language model’s training data, every answer is assembled from documents you control. The model cannot fabricate information that isn’t in your knowledge base. We use Bedrock Knowledge Bases, which can crawl websites or point to specific knowledge stores.
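The pattern can be sketched in a few lines: retrieve the best-matching document chunks, then assemble a prompt that forces the model to answer only from them. In production the retrieval step is Bedrock Knowledge Bases vector search; the keyword-overlap scoring below is a toy stand-in used purely to illustrate the shape of the flow.

```typescript
// Toy RAG sketch: score chunks against the question, keep the top
// matches, and build a grounded prompt with numbered citations.

interface Chunk { source: string; text: string; }

function tokenize(s: string): Set<string> {
  return new Set(s.toLowerCase().match(/[a-z]+/g) ?? []);
}

// Stand-in for vector retrieval: rank chunks by word overlap.
function retrieve(question: string, chunks: Chunk[], topK = 2): Chunk[] {
  const q = tokenize(question);
  return chunks
    .map(c => {
      const t = tokenize(c.text);
      let overlap = 0;
      for (const w of q) if (t.has(w)) overlap++;
      return { c, overlap };
    })
    .sort((a, b) => b.overlap - a.overlap)
    .slice(0, topK)
    .filter(x => x.overlap > 0)
    .map(x => x.c);
}

// The prompt instructs the model to answer only from retrieved sources,
// which is what makes every answer traceable back to a citation.
function buildGroundedPrompt(question: string, retrieved: Chunk[]): string {
  const context = retrieved
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.text}`)
    .join("\n");
  return (
    `Answer ONLY from the sources below. Cite sources as [n].\n` +
    `If the sources do not contain the answer, say so.\n\n` +
    `${context}\n\nQuestion: ${question}`
  );
}
```

The “say so” instruction is the key design choice: when retrieval comes back empty, the honest answer is a documentation gap, not a guess.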

Admin Portal with Knowledge Base Management

The City Clerk’s staff manage the system through a secure admin portal. They can trigger crawls of city websites, monitor indexing jobs, and review every interaction residents have had with the assistant, all without engineering involvement and without ever touching the AWS console.

Manage Content

Interaction Analytics

Every question and answer is stored in a structured database. Administrators can browse and search all interactions to identify common resident questions, discover gaps in documentation, and improve city services.

Review all interactions online.

All interactions pass through Amazon Bedrock Guardrails, which filter harmful content, block prompt injection, and keep responses within the scope you’ve defined: a non-negotiable requirement for a public-facing agent representing the city.
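As a rough intuition for one thing scope enforcement does, here is a toy pre-filter that refuses questions outside a defined topic list. Real Bedrock Guardrails are configured declaratively (denied topics, content filters, prompt-attack detection) rather than coded by hand; this keyword allowlist is illustrative only.

```typescript
// Toy stand-in for topic scoping: a question must touch at least one
// allowed civic topic before it is forwarded to the model.
// The topic list here is invented for illustration.
const allowedTopics = ["permit", "ordinance", "license", "utility", "zoning"];

function inScope(question: string): boolean {
  const q = question.toLowerCase();
  return allowedTopics.some(topic => q.includes(topic));
}
```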

AWS Serverless Architecture

Civic AI Agent is built entirely on AWS managed services. There are no servers to patch, no infrastructure to capacity-plan, and no operational overhead beyond the application itself. It scales automatically from zero to thousands of concurrent users.

[Diagram: Architecture overview — Amplify → API Gateway → Lambda → Bedrock Agent → Knowledge Base → DynamoDB]

Infrastructure as Code with AWS CDK

Every resource in the system (Lambda functions, API Gateway stages, DynamoDB tables, Cognito user pools, CloudWatch alarms) is defined in code using the AWS Cloud Development Kit (CDK). There is no manual console configuration. The entire infrastructure can be deployed from scratch with a single command.

CDK lets us write infrastructure in TypeScript, the same language the team already knows. Constructs are composable, type-safe, and version-controlled alongside the application code. When a new customer needs a deployment, the template is forked, a configuration file is updated, and `cdk deploy` stands up a complete, production-ready environment.

This approach means:

  • Repeatability – every deployment is identical; no manual steps that can be forgotten or misconfigured
  • Auditability – infrastructure changes go through code review, just like application code
  • Teachability – new engineers can read the CDK stacks and understand exactly what is deployed and why

One Resource Per Layer — Environment Isolation Without Duplication

Rather than deploying entirely separate infrastructure stacks for dev and prod, each AWS service provides its own isolation mechanism:

  • Bedrock Agents: One agent with two aliases — a draft alias routes to the in-development version (dev), a pinned alias routes to the stable promoted version (prod)
  • Lambda: One function per handler with `dev` and `prod` aliases — the dev alias always tracks the latest version; the prod alias is explicitly promoted after testing
  • API Gateway: One REST API with `dev` and `prod` stages — stage variables route each request to the matching Lambda alias at runtime, with no code duplication
  • Amplify: One app with `dev` and `prod` branches — the branch determines the environment automatically
  • DynamoDB / Cognito: Separate tables and user pools per environment — data and authentication isolation is required at these layers

One codebase. Two environments. No duplicated infrastructure stacks.
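The API Gateway layer of this pattern can be sketched as a small routing rule: each stage carries a stage variable (called `lambdaAlias` here, a name we chose for illustration) that selects the matching Lambda alias at request time, so a single API definition serves both environments.

```typescript
// Sketch of "one resource per layer" routing: one REST API, two stages,
// one Lambda function with dev/prod aliases. The stage variable decides
// which alias an integration invokes. Names are illustrative.

type Stage = "dev" | "prod";

const stageVariables: Record<Stage, { lambdaAlias: string }> = {
  dev: { lambdaAlias: "dev" },   // tracks the latest in-development version
  prod: { lambdaAlias: "prod" }, // explicitly promoted after testing
};

// In the API definition, the integration URI would reference the alias
// as ${stageVariables.lambdaAlias}; resolving it looks like this:
function integrationArn(functionArn: string, stage: Stage): string {
  return `${functionArn}:${stageVariables[stage].lambdaAlias}`;
}
```

Promoting to production then means repointing the `prod` alias at a tested function version; the API definition itself never changes.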

This pattern eliminates configuration drift between environments, reduces infrastructure cost, and mirrors exactly how AWS intends these services to be used.

Infrastructure changes (CDK stacks) follow the same review-and-promote pattern through AWS CodePipeline with an audit trail, approval gates, and a fully automated path from development code commit to production.

Observability Built In

Every deployment includes CloudWatch dashboards and alarms out of the box:

  • API Gateway request volume, error rates, and p99 latency
  • Lambda duration, error counts, and throttles
  • DynamoDB system health
  • AWS X-Ray distributed tracing for end-to-end request visibility

Why These Choices Matter

Every architectural decision in Civic AI Agent is defensible and teachable. We chose Amazon Bedrock Agents over building our own orchestration layer because the agent primitive is where AWS is investing: Guardrails and knowledge base integration are first-class features, not glue code we have to maintain. We chose CDK over console configuration because infrastructure that only exists as clicks in a console can’t be code-reviewed, can’t be diffed, and can’t be reproduced from scratch — and that means your team doesn’t really own it. We chose aliases and stages over duplicated stacks because that is how AWS designed these services to support multiple environments.

For a technical buyer, this matters in two ways. First, nothing here is exotic. Any engineer fluent in AWS serverless patterns can read the CDK, understand the Bedrock Agents configuration, and extend the system. Second, every pattern in this build is one we teach in our instructor-led classes. When we hand this system to your team, they are not inheriting a black box. They are inheriting a reference implementation of the same patterns they will see in training.

AWS Well-Architected Framework Alignment

Civic AI Agent is built to the AWS Well-Architected Framework across all six pillars:

  • Operational Excellence: AWS CDK infrastructure as code; CI/CD via Amplify and CodePipeline; CloudWatch dashboards and alarms
  • Security: IAM least-privilege roles, Cognito MFA, Bedrock Guardrails, no hardcoded credentials
  • Reliability: Serverless and multi-AZ managed services
  • Performance Efficiency: Serverless auto-scaling and graceful error handling
  • Cost Optimization: Pay-per-request on Lambda, DynamoDB, and AI model invocation; no idle compute; shared resources across environments
  • Sustainability: Serverless eliminates over-provisioned capacity; shared Bedrock Agents reduce resource duplication

Deploying for a New City

The architecture is designed to be replicated. Each new customer gets their own AWS account with a full, independent deployment, not a shared multi-tenant system. That means complete data isolation, independent scaling, and no blast radius between customers. Because the entire stack is defined in CDK, standing up a new city is a matter of forking the template, updating a configuration file, and running `cdk deploy`. A new base environment is live in under an hour. The real work is defining your data sources and how you want your agents to work.

About Tech Reformers

Tech Reformers is an AWS Advanced Services Partner and AWS Authorized Training Partner (ATP). We design and build cloud-native solutions on AWS, and we train the developers who build and the DevOps engineers who maintain them.


Civic AI Agent is both a production product and a reference implementation: every architectural decision demonstrates the AWS patterns our instructors teach in the classroom. When we hand a project to a client’s engineering team, they receive working code and the training to own it.

If you’re a city, school district, or public agency interested in deploying Civic AI Agent, contact us. If you’re an engineering organization looking to upskill your team on AWS serverless and generative AI patterns, see our upcoming instructor-led training.


In today’s rapidly evolving tech landscape, generative AI has emerged as a game-changing technology with immense potential across industries. While off-the-shelf AI solutions like ChatGPT or Copilot have gained popularity, many businesses are discovering the need for more customized, secure, and scalable AI solutions. This is where Amazon Web Services (AWS) shines. AWS offers a comprehensive suite of generative AI tools and services designed for enterprise-grade applications.


Made with Amazon Bedrock and generated by Stability AI SD3 Large 1.0.

AWS’s generative AI offerings stand out for the customization and control they provide over model selection and deployment. You choose from a variety of models to fit your needs, and you build knowledge bases with your own data. Enhanced data privacy and security measures ensure all data stays in your custody, and seamless integration with existing AWS infrastructure leverages the most extensive cloud platform. These solutions offer the scalability and high performance needed for demanding enterprise applications, and cost optimization through granular resource control can save money over off-the-shelf solutions. With services like Amazon Bedrock, Amazon SageMaker, and Amazon Q, AWS provides the flexibility and power to build, train, and deploy state-of-the-art AI models tailored to your specific business needs.

To help tech professionals harness the full potential of these powerful tools, Tech Reformers is excited to offer the “Developing Generative AI Applications on AWS” course this Fall. This intensive 16-hour program equips participants with the skills and knowledge to create cutting-edge generative AI applications using Amazon Web Services.


Course Highlights: From Basics to Advanced Applications

This AWS-authored course begins by laying a solid foundation in generative AI concepts and their implementation on AWS. Participants gain hands-on experience with key AWS services: Amazon SageMaker for building, training, and deploying machine learning models at scale; AWS Lambda for creating serverless, event-driven applications; and Amazon Bedrock, a fully managed service that offers state-of-the-art foundation models from leading AI companies. Understanding these services is crucial for developing robust, scalable generative AI applications in the AWS ecosystem.

A significant portion of the course is dedicated to the art and science of prompt engineering. Participants will delve into basic and advanced prompt techniques, model-specific optimizations, and strategies to mitigate bias and address potential misuses. This knowledge is essential for effectively communicating with AI models and obtaining desired outputs.

The course goes beyond theory, providing practical insights into building generative AI applications. Participants will explore working with datasets and embeddings, implementing Retrieval Augmented Generation (RAG), applying model fine-tuning techniques, and securing generative AI applications. The curriculum also covers LangChain, a powerful framework for developing applications that leverage large language models (LLMs).

To bridge the gap between theory and practice, the course delves into various architecture patterns for generative AI applications. Through hands-on demonstrations, participants will gain practical experience in implementing patterns for text summarization, question answering systems, chatbots, and code generation using AWS services and tools.

Who Should Attend?


While the course is primarily designed for software developers, it offers valuable insights for a broader range of tech professionals. Systems architects and systems engineers with Python skills will find the content highly relevant to their work. The comprehensive coverage of AWS services and generative AI concepts makes this course an excellent opportunity for anyone looking to expand their skills in this cutting-edge field.

Empowering Your Career with Generative AI Skills

By completing this course, participants will be well-equipped to design and implement generative AI solutions using AWS services. They’ll learn to optimize AI model performance through advanced prompt engineering techniques and develop secure and scalable generative AI applications. Finally, perhaps most importantly, they’ll have the skills to apply generative AI to solve real-world business problems.

These skills are increasingly in demand across industries, opening up new career opportunities and pathways for innovation within organizations. As generative AI continues to transform the tech landscape, professionals with expertise in AWS generative AI services will be well-positioned to lead this revolution.

Join the Generative AI Revolution with Tech Reformers

Don’t miss this opportunity to be at the forefront of the generative AI revolution. Tech Reformers, an AWS Authorized Training Partner, will deliver the Developing Generative AI Applications on AWS course as a comprehensive, hands-on learning experience that empowers you to create the next generation of AI-powered applications.

Enroll today and take the first step towards mastering generative AI on AWS. Your journey to becoming a generative AI expert starts here, with a course designed to give you practical, in-demand skills that will set you apart in the rapidly evolving field of AI.

As the AI landscape continues to evolve, those who can harness the power of AWS generative AI services will be at the forefront of innovation. Don’t let this opportunity pass you by – join us and unlock the full potential of generative AI on AWS.

SFTP with AWS Transfer Family

Organizations need efficient, secure file transfer methods, and SFTP on AWS delivers them. AWS Transfer Family offers a robust solution for managing file transfers using various protocols, including SFTP (SSH File Transfer Protocol). This service simplifies the setup and management of file transfers, providing numerous benefits for businesses of all sizes.


Key Benefits of AWS Transfer Family:

Easy Setup:

  • Setting up an SFTP server with AWS Transfer Family is straightforward. With just a few clicks in the AWS Management Console, you can create a server and configure it to meet your specific requirements.

Flexible Authentication:

  • AWS Transfer Family supports multiple authentication methods, including service-managed users, AWS Directory Service for Microsoft Active Directory, and custom identity providers. This flexibility allows you to choose the authentication method that best suits your needs.

Scalability:

  • AWS Transfer Family scales effortlessly as your business grows to accommodate increased file transfer demands. You can easily adjust server capacity and storage to match your requirements.

Security:

  • AWS Transfer Family offers built-in security features to protect your data during transfer. It supports encryption in transit and at rest, ensuring that your files remain secure at all times.

Integration with S3:

  • AWS Transfer Family integrates seamlessly with Amazon S3, allowing you to store files in S3 buckets. This integration simplifies file management and provides a scalable storage solution.

Cost-Effective:

  • With AWS Transfer Family, you only pay for what you use. There are no upfront fees or long-term commitments, making it a cost-effective solution for file transfer needs.

By leveraging AWS Transfer Family, businesses can streamline their file transfer processes, improve security, and scale their operations efficiently. Whether you’re a small business or a large enterprise, AWS Transfer Family offers the flexibility and scalability you need to manage your file transfer requirements effectively.

To take advantage of the benefits of SFTP on AWS and learn more about setting up an SFTP server using AWS Transfer Family, check out our detailed guide: SFTP (SSH File Transfer Protocol) in AWS Transfer Family – Setup Instructions. This quick how-to guide will walk you through the process of creating an SFTP server and configuring it to meet your specific needs.


ChatGPT and generative AI are having a significant impact on multiple industries and on how people learn. Generative AI is a subset of machine learning: the models that power ChatGPT include large language models (LLMs) and multi-modal models that can handle text, images, video, and audio.

Artificial Intelligence in action on a laptop
Photo by Matheus Bertelli: https://www.pexels.com/photo/woman-laptop-working-internet-16094040/

To begin, note that artificial intelligence (AI) is nothing new at Amazon Web Services. Examples of AI/ML in action include Alexa, Amazon’s Just Walk Out, and Amazon Prime. Tech Reformers uses AI/ML in its document processing solution. OpenAI released ChatGPT to the public in November 2022, and within two months it reached 100 million monthly active users. Researchers and those working on Natural Language Processing (NLP) projects use ChatGPT. In short, AI can be applied to many different tasks and is trained on data from textbooks, articles, and websites.

What Is Amazon Bedrock?

Natural-language processing has been around for a while at AWS. Years ago, AWS introduced Amazon Comprehend, an NLP service that uses machine learning to find insights and connections in text. More recently, Amazon launched Amazon Bedrock among its AI/ML services. Amazon Bedrock is an easy way to build and scale generative AI applications with foundation models (FMs): AI neural networks trained on raw data that can be adapted to accomplish a wide range of tasks. Bedrock provides the flexibility to choose from a range of foundation models built by AI startups and by Amazon itself, allowing customers to select the best model for their needs and goals.

In true cloud computing fashion, Bedrock is a serverless service, so customers can get started quickly. They can customize foundation models with their own data and integrate them into applications, all without having to manage any infrastructure.
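Integrating Bedrock into an application starts with building a model request. As a minimal sketch, here is how the JSON body for invoking an Anthropic Claude model through Bedrock’s InvokeModel API can be assembled; constructing the payload is plain JSON, while actually sending it requires the AWS SDK’s Bedrock runtime client, omitted here. The field names follow the Anthropic-on-Bedrock messages schema as we understand it; verify against the current documentation before relying on them.

```typescript
// Sketch: build the request body for invoking a Claude model on Bedrock.
// Sending it would use the AWS SDK (BedrockRuntimeClient), not shown here.

interface ClaudeMessage { role: "user" | "assistant"; content: string; }

function buildClaudeBody(prompt: string, maxTokens = 512): string {
  const body = {
    // Version string required by the Anthropic-on-Bedrock schema.
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: maxTokens,
    messages: [{ role: "user", content: prompt } as ClaudeMessage],
  };
  return JSON.stringify(body);
}
```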

The foundation models that Bedrock supports are Jurassic-2, Claude, Stable Diffusion, and Amazon Titan. Data scientists train the Amazon Titan FMs on large datasets, which makes them powerful, general-purpose models that customers can use as-is or customize privately with their own data.

Use cases for Amazon Bedrock are:

  • Text generation
  • Chatbots
  • Search
  • Text Summarization
  • Image generation
  • Personalization

Get started with key use cases quickly

Text generation

Create new pieces of original content, such as short stories, essays, social media posts, and webpage copy.

Chatbots

Build conversational interfaces such as chatbots and virtual assistants to enhance the user experience for your customers.

Search

Search, find, and synthesize information to answer questions from a large corpus of data.

Text summarization

Get a summary of textual content, such as articles, blog posts, books, and documents, to get the gist without having to read the full content.

Image generation

Create realistic and artistic images of various subjects, environments, and scenes from language prompts.

Personalization

Help customers find what they’re looking for with more relevant and contextual product recommendations than word matching.

To sign up for this new service, complete this short form at https://pages.awscloud.com/generative-AI-interest-learn.html.

Tasha Penwell photo. She writes about cloud and artificial intelligence.

Tasha Penwell is an AWS Educator, Authorized Instructor, and a Certified Solutions Architect. She is also a subject matter expert (SME) in web development, cloud security, and cloud computing. As a speaker, she talks about AWS education and AR technologies.
