Artificial Intelligence (AI) is rapidly transforming the machinery of government. Across the United States, public agencies are embracing AI in government programs to automate routine tasks, inform policy decisions, and enhance citizen services. This shift towards intelligent automation is reshaping core democratic institutions – from city halls to federal departments – by making them more efficient, data-driven, and responsive. Importantly, government CIOs and digital transformation leaders are looking beyond hype to strategic, enterprise-level deployments of AI that uphold public values like transparency and accountability.
The result is a new paradigm of public sector AI: one where automation augments human decision-making and policy execution at scale, rather than replacing the human judgment at the heart of democracy. In this blog, we explore how automation is reinventing government operations, the rise of “decision intelligence” frameworks in the public sector, and how modular, multi-agent AI systems (including Klover’s AGD™, P.O.D.S.™, and G.U.M.M.I.™ frameworks) support transformative change in U.S. government institutions.
The New Era of AI-Powered Government Operations
Government agencies have entered a new era of modernization fueled by AI and automation. Traditional bureaucratic processes – often bogged down by paperwork and legacy IT systems – are being streamlined through technologies like robotic process automation (RPA), machine learning, and digital assistants. In fact, more than 40% of U.S. federal agencies had experimented with AI tools by 2020, though only about 15% were using highly sophisticated AI (e.g. machine learning) at that time.
This indicates that many agencies started with simpler automation (like RPA or rule-based systems) and are now poised to expand into more advanced enterprise automation. The motivation is clear: agencies face pressure to “do more with less” and meet rising citizen expectations for fast, convenient public services. AI offers a path to radically improve internal efficiency and public-facing service delivery.
Key benefits of AI-driven automation in government include:
- Efficiency Gains: Intelligent automation can handle high-volume repetitive tasks (data entry, form processing, eligibility checks) much faster than humans, freeing up employees for higher-value work. For example, the U.S. Navy’s financial management office deployed 159 RPA bots, saving an estimated 161,000 labor hours in one year. These efficiency gains translate into faster processing of citizen requests and reduced backlogs.
- Cost Savings and Scalability: Automating routine workflows helps agencies reduce operational costs. Once an AI “bot” is developed, it can scale to handle surges in workload (such as seasonal spikes in benefit applications) without proportional cost increases. The U.S. Department of Agriculture (USDA) found that process automation allowed in-house teams to identify bottlenecks and streamline operations in a fraction of the time and cost of traditional IT projects.
- Improved Accuracy and Compliance: AI systems, when properly trained, perform tasks with high consistency and can reduce human error in processes like financial reconciliations, auditing, and data transfers. Automated checks also ensure that rules and policies are applied uniformly. Agencies like the Food and Drug Administration use AI to detect adverse drug events, and the IRS has piloted AI to flag fraudulent filings – improving accuracy in enforcement.
- Better Citizen Experience: By digitizing services and adding AI, agencies can respond to the public faster and more effectively. Chatbots and virtual assistants (often powered by natural language AI) are now handling common inquiries for city and state services 24/7, reducing wait times for answers. For instance, several U.S. cities have deployed AI chatbots for 311 information services, allowing residents to get quick answers about trash pickup schedules, permit procedures, or COVID-19 updates without human intervention.
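To make the chatbot pattern in the list above concrete, here is a minimal sketch of the kind of keyword-based intent matching many 311 assistants start with before graduating to full natural language models. The intents, keywords, and canned answers are hypothetical examples, not any specific city's deployment.

```python
# Toy 311 chatbot: match a resident's question to a canned answer by keyword,
# and escalate anything unmatched to a human agent. All content is illustrative.

FAQ_INTENTS = {
    "trash": "Trash is collected weekly; check your address for the pickup day.",
    "permit": "Building permit applications are accepted online, Monday-Friday.",
    "covid": "See the city health department page for current COVID-19 updates.",
}

def answer(question: str) -> str:
    """Return the canned answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, response in FAQ_INTENTS.items():
        if keyword in q:
            return response
    # Unmatched questions are handed off rather than guessed at.
    return "I'm not sure - let me connect you with a staff member."

print(answer("When is trash pickup on my street?"))
```

Production systems replace the keyword lookup with trained intent classifiers, but the escalate-when-unsure behavior shown here is the piece that keeps a human in the loop.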
These benefits illustrate why analysts consider AI a cornerstone of government transformation. Gartner, for example, has identified “AI for decision intelligence” and “adaptive automation” among the top technology trends in government for 2024. Notably, Gartner predicts that by 2026, over 70% of government agencies will use AI to enhance human decision-making and measure significant productivity gains as a result. In short, AI-driven automation is no longer a futuristic concept – it is a present reality driving one of the most sweeping modernization efforts in government history. Forward-looking CIOs are seizing this moment to fundamentally reengineer how their agencies work.
The era of AI in government operations is here. Early automation wins – from federal RPA bots saving thousands of hours to city chatbots engaging citizens – demonstrate measurable improvements. As agencies build on these successes, they are laying the groundwork for more ambitious AI initiatives that go beyond efficiency into the realm of intelligent, data-informed governance. Next, we examine how this evolution is giving rise to decision intelligence in the public sector and the frameworks guiding its implementation.
Decision Intelligence: From Data to Informed Policy
Implementing AI in government isn’t just about automating tasks – it’s also about augmenting decision-making. Public sector leaders are increasingly adopting the discipline of decision intelligence, which Gartner defines as “the measured, systematic use of AI as a component of improving government mission achievement faster, more accurately and with sustainable resource use”. In practice, decision intelligence means using data-driven AI tools to inform choices at every level of governance, from one-off strategic decisions (like where to invest budget resources) to high-volume operational decisions (like approving permit applications). This is a significant shift for institutions that have traditionally relied on human expertise, static rules, and historical precedent to make decisions. With AI, agencies can leverage real-time data, predictive analytics, and even simulation to guide more intelligent decision processes.
How AI is enhancing decision-making in government:
- Data-Driven Insights: AI systems can analyze vast datasets (such as economic indicators, program metrics, or social media feedback) far faster than human analysts. This capability helps agencies uncover patterns and trends that inform policy.
- Policy Simulation and Scenario Planning: Multi-agent systems – where multiple AI “agents” simulate the behaviors of various actors – are becoming valuable tools for policy analysis. Government teams can use these AI-driven simulations to model complex scenarios (economic changes, population shifts, disaster responses) and see how different policy choices might play out.
- AI Decision Support Systems: Rather than replacing officials, modern AI acts as a digital assistant to amplify human judgment. Advanced decision support systems can integrate numerous data sources and stakeholder inputs to present officials with recommended options or risk assessments.
- Continuous Learning and Feedback: Unlike static rules, AI systems can continuously learn from new data. This means decision processes augmented by AI can improve over time. If an algorithm advising a social services agency on resource allocation makes a less-than-optimal recommendation, the outcomes (and feedback from agency staff) can be fed back into the model to refine future suggestions.
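The policy-simulation idea in the list above can be sketched in a few lines. Below, hypothetical "household" agents decide whether to enroll in a benefit program at different subsidy levels, and aggregate uptake is compared across scenarios. The behavioral rule and all numbers are invented purely to illustrate the multi-agent scenario-planning pattern.

```python
# Toy agent-based policy simulation: each household agent enrolls when the
# subsidy outweighs its (need-dependent) application burden. Sweeping the
# subsidy parameter is a miniature version of scenario planning.

from dataclasses import dataclass

@dataclass
class Household:
    need: float  # 0.0 (low need) .. 1.0 (high need)

    def takes_up_benefit(self, subsidy: float) -> bool:
        # Illustrative rule: burden of applying is inversely related to need.
        application_burden = 1.0 - self.need
        return subsidy >= application_burden

def simulate_uptake(households, subsidy: float) -> float:
    """Fraction of agents that enroll under a given subsidy level."""
    enrolled = sum(h.takes_up_benefit(subsidy) for h in households)
    return enrolled / len(households)

population = [Household(need=i / 9) for i in range(10)]  # needs span 0.0..1.0
for subsidy in (0.2, 0.5, 0.8):
    print(f"subsidy={subsidy:.1f} -> uptake={simulate_uptake(population, subsidy):.0%}")
```

Real policy simulations use far richer agents and calibrated data, but the structure is the same: vary a policy lever, let heterogeneous agents respond, and compare aggregate outcomes before committing resources.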
A notable framework in this realm is Klover’s Artificial General Decision-Making (AGD™), which reimagines AI’s role as a collaborative force for human decision-makers. Instead of pursuing AI that operates autonomously with superhuman intelligence, AGD™ focuses on AI that can generalize across decision contexts to assist humans. The idea is to provide expert-level support for any decision an official or citizen needs to make – whether it’s routine (e.g. approving a benefits claim) or strategic (e.g. planning a new infrastructure project).
By leveraging multi-domain AI agents and “intuitive intelligence” tuned to individual decision-maker needs, AGD™ aims to make every government staff member a dramatically more capable, augmented decision-maker. In practical terms, an AGD™-driven assistant could help a city manager dynamically weigh budget trade-offs, or guide a citizen through personalized decision paths for accessing services. Early concepts like decision intelligence in government focus on specific data-driven decisions, but AGD™ goes further, striving for general-purpose decision augmentation that learns and adapts to all the unique decisions government actors face.
Of course, implementing AI for decision support in the public sector must be done responsibly. Researchers emphasize that agencies need clear standards, transparency, and ethical safeguards when embedding AI into decision processes. The goal is to avoid bias or black-box algorithms and instead ensure AI recommendations are explainable and aligned with public values.
Klover’s AGD™ framework directly addresses these concerns by building explainability and ethical considerations into each decision-agent it deploys. For example, if an AGD system suggests a policy change, it would also provide the rationale and evidence behind that suggestion – enabling leaders to justify decisions to stakeholders (and auditors) in a democratic context. This alignment of AI-driven insights with human oversight and societal norms is critical for reshaping democratic institutions without eroding trust.
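The "rationale and evidence" idea described above can be sketched as a recommendation that carries its own per-criterion breakdown. The criteria, weights, and options below are invented placeholders to illustrate explainable scoring in general, not Klover's actual AGD™ implementation.

```python
# Explainable recommendation sketch: every suggested option exposes the
# evidence behind its score so a human reviewer (or auditor) can see exactly
# why it ranked first. Criteria and weights are hypothetical.

WEIGHTS = {"cost_effectiveness": 0.4, "equity_impact": 0.35, "feasibility": 0.25}

def score_option(name: str, ratings: dict) -> dict:
    contributions = {c: WEIGHTS[c] * ratings[c] for c in WEIGHTS}
    return {
        "option": name,
        "score": round(sum(contributions.values()), 3),
        "rationale": contributions,  # per-criterion breakdown, not a black box
    }

options = [
    score_option("Expand bus route",
                 {"cost_effectiveness": 0.7, "equity_impact": 0.9, "feasibility": 0.8}),
    score_option("Road widening",
                 {"cost_effectiveness": 0.5, "equity_impact": 0.4, "feasibility": 0.6}),
]
best = max(options, key=lambda o: o["score"])
print(best["option"], best["score"], best["rationale"])
```

Because the rationale travels with the recommendation, a leader can defend the choice to stakeholders, and an appeals process can contest a specific criterion rather than the whole opaque result.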
Modular AI Architectures and the P.O.D.S.™ Approach
As agencies adopt AI solutions, one practical challenge looms large: integration with sprawling government IT ecosystems. Most public sector IT environments are a patchwork of legacy systems, modern cloud services, and various databases, all of which must work together. This is where modular AI architectures are making a big difference. Instead of monolithic AI systems, governments are favoring modular, interoperable AI components that can plug into existing workflows with minimal disruption. Klover’s P.O.D.S.™ framework (short for Point-of-Decision Systems) is a prime example of this approach. P.O.D.S.™ advocates building AI capabilities as discrete, self-contained “pods” that tackle specific functions – and can be combined like building blocks to create larger solutions. For public sector CIOs, such modular AI design is a game-changer for scalability and agility.
Principles and advantages of modular AI in government:
- Interoperability: Modular AI services communicate through standard interfaces (APIs), making it easier to integrate them with legacy applications or cross-agency data hubs. Because the AI is encapsulated as a service, it can be updated or replaced independently without overhauling the entire system. P.O.D.S.™ follows this principle by ensuring each AI “pod” can feed its outputs into various decision points or data pipelines as needed, acting as a plug-and-play digital solution.
- Reusability: Once a modular AI component is developed, it can be reused across multiple contexts. A great example is a natural language processing (NLP) pod designed to analyze citizen feedback. A city could use this same pod to analyze open-ended survey responses, social media comments, or public testimony transcripts across different departments. By contrast, a one-off AI system might be siloed to a single program. Reusable pods save costs and speed up deployment.
- Scalability and Flexibility: Modular architecture allows government IT leaders to start small and scale what works. You can pilot one AI pod in a department, prove its value, then replicate it more widely. This incremental approach aligns with agile methodologies and reduces risk. It also means AI capabilities can be composed in new ways as requirements evolve.
- Maintainability and Governance: Smaller, modular AI components are easier to manage and audit. Each pod can have its own performance metrics and logs, aiding in monitoring outcomes and detecting issues. If one AI module exhibits bias or errors, it can be debugged or improved in isolation. This fine-grained governance helps maintain public trust in AI systems, as it’s easier to certify that each module meets standards (rather than trying to certify a huge opaque AI platform).
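The four principles above can be illustrated with a small sketch: each "pod" implements one narrow, shared interface, keeps its own metrics, and composes into pipelines. The interface and pod classes below are a hypothetical illustration of the pattern, not Klover's actual P.O.D.S.™ specification.

```python
# Modular "pod" sketch: a common interface (interoperability), pods usable in
# many contexts (reusability), pipelines assembled from parts (scalability),
# and per-pod metrics (maintainability and governance). All names are invented.

from abc import ABC, abstractmethod

class Pod(ABC):
    def __init__(self, name: str):
        self.name = name
        self.calls = 0  # per-pod metric, aiding monitoring and audit

    def run(self, payload: dict) -> dict:
        self.calls += 1
        return self.process(payload)

    @abstractmethod
    def process(self, payload: dict) -> dict: ...

class SentimentPod(Pod):
    """Toy NLP pod reusable across departments (surveys, 311 comments, ...)."""
    def process(self, payload: dict) -> dict:
        text = payload["text"].lower()
        negative = any(w in text for w in ("late", "broken", "unhappy"))
        return {"sentiment": "negative" if negative else "non-negative"}

class EligibilityPod(Pod):
    """Toy rules pod applying one policy check uniformly."""
    def process(self, payload: dict) -> dict:
        return {"eligible": payload["income"] <= 30000}

# Pods compose like building blocks; a pipeline is just a list of pods.
pipeline = [SentimentPod("nlp-v1"), EligibilityPod("snap-rules-v2")]
record = {"text": "My bus is always late", "income": 25000}
results = {pod.name: pod.run(record) for pod in pipeline}
print(results)
```

Because every pod honors the same `run` contract, swapping "nlp-v1" for an improved model changes one class, not the pipeline, and each pod's call count and outputs can be audited in isolation.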
In practice, moving to modular AI requires a cultural shift in government IT procurement and development. Agencies historically purchased large enterprise systems from vendors; now they are starting to assemble solutions from smaller components, some built in-house and some by third parties. The U.S. General Services Administration (GSA) has encouraged this approach through initiatives promoting microservices and APIs in government software design. Klover’s P.O.D.S.™ framework gives public sector teams a concrete blueprint for implementing modular AI: it provides reference architectures and best practices to deploy AI pods that align with government’s unique requirements (security, FedRAMP compliance, etc.).
For instance, P.O.D.S.™ might guide an agency on how to containerize an AI model for easy deployment on their cloud, or how to orchestrate multiple AI services within a container cluster for high availability.
Modular AI architectures like P.O.D.S.™ represent a strategic approach to AI deployment in government. By breaking capabilities into manageable, interoperable pods, agencies can overcome the integration hurdles that often derail tech projects. Modular AI enables incremental progress – a new algorithm here, an automation there – that cumulatively leads to big transformation. Just as importantly, it aligns with public sector needs for flexibility, oversight, and enterprise-level scalability. Klover’s P.O.D.S.™ framework gives government leaders a roadmap to implement AI not as a single solution, but as an evolving ecosystem of digital building blocks that reshape how democratic institutions operate from the inside out.
Multi-Agent Systems and Enterprise Automation (G.U.M.M.I.™)
In the private sector, companies like to say “no single AI can do it all” – they often deploy multi-agent systems where different AI agents specialize and collaborate. The same is becoming true in government settings, especially as automation initiatives mature. Rather than relying on one AI system, advanced government applications are using swarms of AI agents – each with a focused role – to tackle complex, large-scale tasks. This could mean dozens or even hundreds of AI processes running in parallel, interacting with each other and with humans. Managing such an ecosystem requires robust orchestration.
Enter Klover’s G.U.M.M.I.™ framework, which stands for Graphic User Multimodal Multiagent Interfaces. G.U.M.M.I.™ provides the architectural and governance structure to harness many AI agents as a cohesive, coordinated force in an organization. For public sector leaders with enterprise-wide automation ambitions, this approach ensures that scaling up AI doesn’t result in chaos, but rather in a powerful “team of AIs” working in concert.
Complex Problem-Solving
Public sector challenges—like disaster response or infrastructure planning—often require simultaneous coordination across logistics, communications, and resource allocation. Multi-agent systems allow specialized AI agents to handle each layer while a central orchestrator integrates outputs. This creates adaptive, collaborative AI environments suitable for high-stakes scenarios, as highlighted by the Greystones Group.
Parallel Processing at Scale
Agencies like U.S. Citizenship and Immigration Services process millions of records annually. Multi-agent systems can divide tasks across fleets of AI agents to operate simultaneously, drastically reducing throughput times. Klover.ai envisions billions of agents collaborating at scale, with G.U.M.M.I.™ unifying them under shared protocols to ensure coordination and knowledge sharing.
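The fan-out/fan-in pattern described above can be sketched with a simple worker pool. Real deployments would distribute agents across machines and services; a thread pool is the minimal runnable illustration of dividing a caseload among concurrent workers.

```python
# Parallel caseload sketch: split a batch of records across a fleet of worker
# "agents" running concurrently, then gather results in order. The record
# structure and worker logic are placeholders.

from concurrent.futures import ThreadPoolExecutor

def process_record(record_id: int) -> dict:
    # Stand-in for one agent's work on one case (validation, lookups, etc.).
    return {"id": record_id, "status": "reviewed"}

records = list(range(1000))
with ThreadPoolExecutor(max_workers=8) as pool:  # 8 concurrent "agents"
    results = list(pool.map(process_record, records))

print(len(results), results[0])
```

Note that `pool.map` preserves input order, so downstream steps can match results back to cases without extra bookkeeping, one of the coordination details an orchestration layer must guarantee at larger scale.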
Specialization with Unity
Each government function—from permitting to cybersecurity—can benefit from AI agents trained on domain-specific data. G.U.M.M.I.™ ensures that these specialist agents don’t operate in silos, but rather contribute to a collective intelligence layer. This enables real-time coordination between departments, improving overall institutional performance through unified multi-agent orchestration.
Resilience and Redundancy
Redundancy across agents boosts reliability. If one fails or outputs flawed logic, others compensate or flag errors. G.U.M.M.I.™ applies principles of high-reliability design by enabling agent ensembles to cross-verify outputs—similar to peer-reviewed decisions—supporting secure and mission-critical workflows in areas like grant scoring or benefit approvals.
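The cross-verification idea above can be sketched as an ensemble of independent scorers whose disagreement, beyond a tolerance, routes the case to a human instead of auto-deciding. The scorers, thresholds, and tolerance below are invented for illustration.

```python
# Redundancy sketch: three independent scorer "agents" evaluate the same case.
# If their scores diverge too much (e.g. one agent fails or misbehaves), the
# case is escalated rather than decided automatically.

def scorer_a(case): return 0.8 if case["complete"] else 0.2
def scorer_b(case): return 0.75 if case["complete"] else 0.3
def scorer_c(case): return 0.9 if case["complete"] else 0.1  # slightly stricter

def flaky_scorer(case): return 0.0  # simulates a failed or buggy agent

def cross_verify(case, scorers, tolerance=0.25):
    scores = [s(case) for s in scorers]
    spread = max(scores) - min(scores)
    if spread > tolerance:
        # Agents disagree too much: flag for human review, keep the evidence.
        return {"decision": "escalate_to_human", "scores": scores}
    verdict = "approve" if sum(scores) / len(scores) >= 0.5 else "deny"
    return {"decision": verdict, "scores": scores}

print(cross_verify({"complete": True}, [scorer_a, scorer_b, scorer_c]))
print(cross_verify({"complete": True}, [scorer_a, scorer_b, flaky_scorer]))
```

The second call shows the safety property: one broken agent cannot silently sway the outcome, because its outlier score widens the spread and triggers escalation.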
Coordinated, Modular Transformation
Klover’s G.U.M.M.I.™ acts as the connective tissue of an agent-based AI infrastructure, linking specialized pods (P.O.D.S.™) and augmenting them with AGD™’s decision-intelligence framework. In a unified system, a state agency can deploy agents for eligibility checks, predictive modeling, and resource allocation—all orchestrated in real time to support caseworkers and policy leaders in making fast, informed decisions.
Supportive Case Study: Intelligent Automation in Action (U.S. Federal Agencies)
To ground these concepts, let’s look at how AI and automation are already being applied in U.S. government institutions today. Several federal agencies have launched enterprise-level automation programs that illustrate the benefits and lessons of AI integration.
One prominent example is the U.S. Department of Agriculture (USDA). The USDA established an Intelligent Automation Center of Excellence to spearhead RPA and AI projects across its many sub-agencies. Initially, efforts focused on automating routine administrative processes in finance and human resources. According to a report by Scoop News Group, the USDA’s automation team worked within the CFO’s office to turn a patchwork of bot deployments into a department-wide service. Early wins included bots for handling travel voucher approvals and cross-checking financial data between systems, which significantly reduced processing times. A key success factor was fostering a generation of “citizen developers” – USDA employees trained to identify automation opportunities and even build simple bots themselves.
This approach empowered staff in the field to contribute to automation, scaling up adoption rapidly. As of 2024, USDA had implemented dozens of automation use-cases, and was evolving from basic RPA to more intelligent solutions like AI-powered invoice scanning and advanced analytics for program data. Brian Mohr, USDA’s Assistant Secretary for Administration, noted that automation has helped relieve workload burdens as the agency modernizes its systems, acting as a “force multiplier” for better mission outcomes.
The USDA case demonstrates how a large federal agency can roll out enterprise automation incrementally – starting with modular bots (akin to P.O.D.S.™ units) and progressing towards integrated, smarter workflows – all while upskilling its workforce for sustainability.
Another case comes from the U.S. Navy’s financial management branch, which undertook automation to streamline operations. The Department of the Navy (DON) deployed a wave of RPA bots to reduce the manual burden of financial data reconciliation and audits. By 2023, the DON had 159 automation scripts in production, saving approximately 161,000 labor hours annually. These bots handled tasks like pulling data from multiple accounting systems, compiling monthly financial reports, and flagging anomalies for auditors to review.
The time savings freed personnel to focus on analysis and decision-making rather than number-crunching. Importantly, the Navy did not stop at basic task automation; it began exploring process mining tools to identify further bottlenecks and applying AI to assist with predictive analytics in budgeting.
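The reconcile-and-flag pattern behind those Navy bots can be sketched in a few lines: compare the same ledger as reported by two systems and surface discrepancies for auditors rather than auto-correcting them. The record layouts, transaction IDs, and tolerance below are illustrative, not the Navy's actual data.

```python
# Toy reconciliation: the same transactions pulled from two accounting
# systems, with amount mismatches and missing records flagged for human
# auditors. All values are hypothetical.

system_a = {"TX-100": 1500.00, "TX-101": 250.75, "TX-102": 980.00}
system_b = {"TX-100": 1500.00, "TX-101": 250.75, "TX-102": 998.00, "TX-103": 40.0}

def reconcile(a: dict, b: dict, tolerance: float = 0.01):
    anomalies = []
    for tx_id in sorted(set(a) | set(b)):
        if tx_id not in a or tx_id not in b:
            anomalies.append((tx_id, "missing in one system"))
        elif abs(a[tx_id] - b[tx_id]) > tolerance:
            anomalies.append((tx_id, f"amount mismatch: {a[tx_id]} vs {b[tx_id]}"))
    return anomalies

for tx_id, issue in reconcile(system_a, system_b):
    print(f"flag for auditor: {tx_id} - {issue}")
```

The design choice matters: the bot narrows thousands of rows to a short exception list, and the judgment call on each flagged item stays with a human auditor.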
This progression aligns with the concept of moving from task automation to decision intelligence. The Navy’s experience also highlighted the need for governance – they developed standard procedures and an oversight council to ensure bots operated correctly and securely within their complex IT environment. This mirrors the importance of frameworks like G.U.M.M.I.™ for overseeing a growing fleet of digital workers (in this case, RPA bots as simple agents).
These case studies reinforce several important themes. First, starting with clear, narrow tasks (like processing forms or answering FAQs) builds momentum and proof-points for AI in government. Success breeds buy-in for expanding into more complex intelligent automation. Second, the human element – training staff, establishing governance, and maintaining transparency – is pivotal. Agencies that treat AI as a collaboration between humans and machines (not a black box replacement for staff) tend to gain trust and adoption more readily. Third, modularity and scalability are evident: the most successful projects deploy multiple small solutions across different programs rather than one giant system. This portfolio approach resonates with the P.O.D.S.™ strategy of modular services and the multi-agent philosophy of G.U.M.M.I.™. Finally, these examples show that public sector AI is not theoretical – it’s already delivering concrete results, from faster financial audits to round-the-clock citizen services.
Supportive Research Insights: Towards Human-Centric AI in Government
Academic and industry research provides strong support for the direction that U.S. government AI adoption is heading – namely, human-centric, decision-focused, and accountable automation. Scholars studying “AI and democracy” often emphasize that while AI can greatly enhance government effectiveness, it must be implemented in a way that upholds democratic values and doesn’t erode public trust (Toussaint & Weil, 2021). In other words, the how of automation is just as important as the what. Here are a few key insights from recent research that align with the frameworks and practices discussed above:
Aligning AI with Organizational Goals and Values:
A 2022 integrative review by Straub et al. highlighted that public agencies need to embed AI systems with clear operational procedures and normative criteria – effectively linking AI outputs to the agency’s mission and ethical standards. The authors propose that AI in government should be evaluated on operational fitness (does it improve performance?), epistemic completeness (are its recommendations based on sound evidence and knowledge?), and normative salience (does it respect values like fairness and privacy?).
This three-part evaluation mirrors the approach of frameworks like Klover’s AGD™, which explicitly incorporates ethical oversight and bias mitigation in its design. By focusing AI on decision augmentation rather than replacement, AGD™ and similar approaches ensure that human values remain central. In practice, this means an AI system helping a judge in sentencing or an official in allocating funds should be transparent and checkable, aligning with legal standards and policies – not operating as a mysterious algorithmic edict.
The Augmented Workforce:
Public administration researchers observe that AI is changing the nature of government work, but not necessarily eliminating jobs. Instead, many roles are evolving into augmented roles where civil servants work alongside AI tools. This calls for new training and change management. Government CIOs must champion a vision in which employees see AI as empowering. Klover’s emphasis on “Humanizing AI to help people make better decisions” is very much in line with this thinking.
Studies in local governments have found that when employees are involved in automation projects (like the USDA’s citizen developers), job satisfaction can actually improve because mundane tasks are lifted off them and they can focus on more impactful work. The academic consensus is that enterprise automation will succeed only if the public sector workforce is reskilled to work with AI and trusts the tools provided. This underscores the importance of frameworks that include the human-in-the-loop, as AGD™ does by design.
Risks: Bias and Transparency Challenges:
Numerous academic case studies have cautioned about AI’s pitfalls, from biased algorithms in criminal justice to opaque AI denying citizens benefits without explanation. These cautionary tales reinforce why government AI must have robust governance. A concept emerging in research is “algorithmic accountability,” which in government context means mechanisms to audit and explain AI decisions. For example, if an AI system is used to determine eligibility for a housing program, the agency should be able to explain how that decision was reached and allow appeals. Klover’s frameworks support this by prioritizing explainable AI (XAI) techniques and by advocating for oversight boards to review AI ethics.
Moreover, the modular AI approach helps here: if each P.O.D.S.™ module’s function is well-defined and logged, it’s easier to pinpoint where a decision might have gone wrong or which component introduced bias. This modular transparency is far preferable to a monolithic “black box.” The G.U.M.M.I.™ orchestration can also record the chain of agent interactions leading to a decision recommendation, providing an audit trail for accountability.
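The audit trail described above can be sketched as an append-only log where every agent step in a decision chain is recorded with a timestamp, so the path to a recommendation can be replayed during an accountability review. The agent names, case IDs, and payloads are hypothetical.

```python
# Audit trail sketch: each agent interaction behind a recommendation is
# appended to a structured log that an oversight body can later replay.
# Agent names and case details are invented for illustration.

from datetime import datetime, timezone

audit_log = []

def record_step(agent: str, action: str, detail: dict):
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "detail": detail,
    })

# A hypothetical three-agent chain behind one eligibility recommendation.
record_step("intake-pod", "validated_application", {"case": "H-2041", "complete": True})
record_step("rules-pod", "applied_income_test", {"case": "H-2041", "passed": True})
record_step("recommender", "suggested_decision", {"case": "H-2041", "decision": "approve"})

# An auditor can reconstruct exactly which agents touched the case, in order.
trail = [entry["agent"] for entry in audit_log if entry["detail"]["case"] == "H-2041"]
print(" -> ".join(trail))
```

In production this log would be written to tamper-evident storage, but even this minimal form supports the "algorithmic accountability" requirement: the agency can explain how a decision was reached and give appeals a concrete record to contest.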
Impact on Democratic Processes:
Beyond internal operations, thinkers are examining how AI might transform broader democratic processes like policy-making and citizen participation. An intriguing notion is that AI could help “mass collaboration” in democracy – for instance, analyzing citizen input at scale to shape policy drafts, or facilitating deliberative forums with AI mediators. While still experimental, these ideas resonate with the multi-agent approach. One could envision a multi-agent system (per G.U.M.M.I.™) where some agents represent different stakeholder perspectives or analyze public comments, feeding into a decision process that a human leader ultimately oversees.
The hope, as expressed by experts at Harvard’s Ash Center, is that AI might revitalize democracy by making institutions more responsive and informed. However, they also caution that misuse of AI (for misinformation or mass surveillance) could harm democracy. This double-edged sword means public sector AI must be implemented with a clear positive vision and strong safeguards – a challenge that frameworks like AGD™ take on by explicitly aiming to empower individuals and society rather than concentrate unchecked power in AI.
The scholarly and policy research community supports the trajectory of AI in government that is augmentative, modular, and values-driven. The academic sources reinforce why Klover’s approach – Artificial General Decision-Making to enhance human decisions, P.O.D.S.™ for modular deployment, and G.U.M.M.I.™ for orchestrating agents – is not just technically savvy but also aligned with what thought leaders consider responsible innovation. By following these principles, government agencies can harness AI’s benefits while managing its risks, ensuring that automation truly reshapes democratic institutions for the better – making them more effective, equitable, and worthy of citizens’ trust.
Conclusion
AI and automation are ushering in a renaissance in how the government works. Democratic institutions, long seen as slow and procedural, are being reinvented as agile, insight-driven organizations thanks to intelligent automation. In the United States, we already see the glimmers of this transformed public sector – Navy RPA bots expediting financial audits, city chatbots assisting residents, AI models guiding policymakers with data, and multi-agent systems coordinating complex operations. The vision that emerges is compelling: a government that is smarter, faster, and more responsive to citizen needs, yet also more transparent and accountable in its decisions. Achieving this vision at scale will require not just technology, but strategy and leadership.
This is where frameworks like Klover’s AGD™, P.O.D.S.™, and G.U.M.M.I.™ play a crucial role. They offer a playbook for public sector leaders to implement AI thoughtfully and effectively. Artificial General Decision-Making (AGD™) keeps the focus on augmenting human judgment – ensuring that as we automate, we are enhancing the wisdom and effectiveness of public servants, not sidelining them. Point-of-Decision Systems (P.O.D.S.™) provides a modular, flexible approach to build AI capabilities that slot into government systems piece by piece, delivering quick wins and long-term adaptability. And Graphic User Multimodal Multiagent Interfaces (G.U.M.M.I.™) enables the scaling of AI across the enterprise – orchestrating a symphony of specialized agents so that the institution as a whole becomes more than the sum of its parts.
Together, these frameworks embody a strategic and technically rigorous path for digital transformation in government, one that resonates with the priorities of CIOs and innovation leaders: improve mission outcomes, empower the workforce, and maintain public trust.
In embracing AI, government leaders must remain vigilant about ethics, equity, and the rule of law. Automation should never be about abandoning the principles of democracy, but about reinforcing them – by making institutions more capable of delivering on their promises. The reshaping of democratic institutions by AI should lead to agencies that can listen better (through data), act faster (through automation), and decide wiser (through decision intelligence). It should free public servants from drudgery so they can engage more with the people they serve. And it should open new channels for evidence-based policymaking and citizen participation, enhancing the dialogue between government and society.
That future is already unfolding now – and it’s an exciting time to be part of it.
References
Brookings Institution. (2023, July 19). AI can strengthen U.S. democracy — and weaken it. Brookings. https://www.brookings.edu/articles/ai-can-strengthen-u-s-democracy-and-weaken-it
Engstrom, D. F., Ho, D. E., Sharkey, C. M., & Cuéllar, M.-F. (2020). Government by algorithm: Artificial intelligence in federal administrative agencies. Administrative Conference of the U.S. https://www.acus.gov/report/government-algorithm-artificial-intelligence-federal-administrative-agencies
Gartner. (2023). Top technology trends in government for 2024: AI for decision intelligence. Gartner. https://www.gartner.com/en/industries/government-public-sector/topics/government-technology
Giest, S., & Klievink, B. (2022). Augmented bureaucracy—The changing nature of public administration in the age of artificial intelligence. Journal of European Public Policy, 29(7), 1018–1037. https://doi.org/10.1080/13501763.2022.2095001
Goldsmith, S., & Mulligan, C. (2023). AI, democracy, and government innovation. Harvard Ash Center. https://ash.harvard.edu/issues/democracy-and-ai
Greystones Group. (2023). The potential of multi-agent systems (MAS) in the federal government. https://greystonesgroup.com/the-potential-of-multi-agent-systems-mas-in-the-federal-government/
Johnson, K. (2020, February 19). Stanford and NYU: Only 15% of AI federal agencies use is highly sophisticated. VentureBeat. https://venturebeat.com/ai/only-15-of-ai-federal-agencies-use-is-highly-sophisticated-according-to-stanford-and-nyu-report/
Kitishian, D. O. (2025, March 1). Google Gemini on why Klover’s approach to AI & decision making is the best way forward. Medium. https://medium.com/@danykitishian/google-gemini-on-why-klovers-approach-to-ai-decision-making-is-the-best-way-forward-aadb76bf5539
Misuraca, G., van Noordt, C., & Boukli, A. (2020). Exploring the use and impacts of artificial intelligence in public services. Publications Office of the European Union. https://data.europa.eu/doi/10.2760/039619
Newell, S., & Marabelli, M. (2020). Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of ‘datification.’ Journal of Strategic Information Systems, 29(4), 101618. https://doi.org/10.1016/j.jsis.2020.101618
Noveck, B. S. (2021). Solving public problems: A practical guide to fix our government and change our world. Yale University Press.
Richardson, R., Schultz, J. M., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review, 94(1), 192–233. https://www.nyulawreview.org/wp-content/uploads/2019/04/NYULawReview-94-Richardson-Schultz-Crawford.pdf
Scoop News Group. (2024, March 20). How automation and AI are streamlining traditional government IT modernization. FedScoop. https://fedscoop.com/how-automation-ai-streamline-government-it-modernization/
Straub, V. J., Morgan, D., Bright, J., & Margetts, H. (2022). Artificial intelligence in government: Concepts, standards, and a perspective on research. The Alan Turing Institute. https://www.turing.ac.uk/sites/default/files/2023-11/straub-et-al-2022-ai-in-gov.pdf
Toussaint, A., & Weil, D. (2021). Artificial intelligence and democratic values: Opportunities and risks. Brookings Working Paper. https://www.brookings.edu/research/artificial-intelligence-and-democratic-values-opportunities-and-risks/
U.S. Department of the Navy. (2023, November 15). Fiscal year 2023 financial statement audit demonstrates reform and readiness. U.S. Navy. https://www.navy.mil/Press-Office/News-Stories/Article/3591529/department-of-the-navy-fiscal-year-2023-financial-statement-audit-demonstrates/
Zuurmond, A., Bekkers, V., & Fenger, M. (2021). Algorithmic accountability in government: A conceptual framework. Information Polity, 26(2), 123–138. https://doi.org/10.3233/IP-200296