In today’s rapidly evolving digital landscape, we often adopt new technologies faster than we fully understand them. Generative Artificial Intelligence (AI) is the latest addition to this list, with adoption growing at an annual rate of 24.4%. Generative AI has enormous potential to revolutionize the finance and accounting (F&A) sector.
This state-of-the-art technology can be used in fraud detection, both by identifying anomalies in financial transactions and by generating synthetic data to train fraud detection models. Generative AI can be a game-changer in compliance, with its power to automate the review of compliance documents and flag potential risks. Deriving insights from financial reports becomes far easier with the enhanced readability and clarity that generative AI offers. The possible applications in banks and governments are endless. However, generative AI has also drawn considerable discussion around ethical, legal, and regulatory concerns. While these concerns are valid, they should not hinder the adoption of generative AI in the government and public sector, provided its use is advanced responsibly. Below are five essential considerations for maximizing the potential of generative AI in the public sector.
Exacerbation of bias: Human accountability will become more crucial as F&A teams rapidly adopt generative AI. According to experts, a human layer of validation and a final sign-off on generated reports will be indispensable. However, it is important to note that humans bring inherent biases with them, and these biases may be amplified in AI-generated content. Financial leaders must ensure that human leanings toward materiality or conservatism are not reflected in the training data, and must regularly evaluate the generated content.
Reliability of output: Generative AI simplifies financial analysis and generates reports effortlessly. This could tempt some stakeholders to bypass F&A and rely on seemingly smart generative AI advisors to make decisions. It is important to remember that the reliability of these systems’ output improves only over time. Restricting usage initially, while the accuracy of inferences improves, and staying vigilant in validating the output are highly encouraged. Having liaisons from F&A teams work with the stakeholders affected by the generated content will help verify the output in the early stages.
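Validating generated output need not be entirely manual. As a minimal sketch (the figures and function name here are hypothetical examples, not any specific tool), an automated check can recompute a summary figure from the underlying ledger and compare it against the number appearing in an AI-generated report before sign-off:

```python
# Illustrative sketch: recompute a total from source transactions and
# compare it with the figure quoted in a generated report.
# All numbers and names are hypothetical examples.

def validate_total(transactions, reported, tolerance=0.01):
    """Return True if the reported total matches the recomputed sum
    within the given tolerance."""
    return abs(sum(transactions) - reported) <= tolerance

ledger = [1200.00, 850.50, 430.25]   # source transactions
ai_reported_total = 2480.75          # figure from a generated report

print(validate_total(ledger, ai_reported_total))  # True
```

A check like this does not replace the human sign-off described above; it simply narrows the reviewer’s attention to figures that fail to reconcile.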
Security and risks: Most generative AI software available today does not guarantee the privacy of data. Risks such as cybersecurity vulnerabilities could also pose a challenge to the implementation of generative AI, especially in the public sector. Increasingly, private enterprises are considering developing private models, trained in secure environments, that can mitigate these risks. Governments and public sector organizations should partner with technology providers to ensure that both the data used to train the models and the data generated by them are secured.
Sovereignty of data and data privacy: Generative AI does not exercise discretion over the content it is given. When organizations feed vast swaths of data into training models, they may be increasing the risk of data exposure and noncompliance with data privacy regulations. For instance, organizations that are not permitted to use certain third-party data may be unable to prevent a generative AI model from drawing on it. User controls should be in place to restrict access to such data in these models. Furthermore, clear regulations on data privacy and content usage concerning generative AI would help organizations comply with the law.
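One simple form such user controls can take is a pre-processing gate that masks restricted fields before any record reaches a model prompt or training corpus. The sketch below is purely illustrative (the field names are hypothetical, and real deployments would use policy-driven classification rather than a hard-coded list):

```python
# Illustrative sketch: mask restricted fields before a record is used
# in a model prompt or training set. Field names are hypothetical.
RESTRICTED_FIELDS = {"ssn", "account_number", "third_party_score"}

def redact_record(record: dict) -> dict:
    """Return a copy of the record with restricted fields masked,
    so they never enter a generative model."""
    return {
        key: "[REDACTED]" if key in RESTRICTED_FIELDS else value
        for key, value in record.items()
    }

txn = {"vendor": "Acme Corp", "amount": 1250.00, "account_number": "12-3456"}
safe = redact_record(txn)
print(safe)  # account_number is masked; other fields pass through
```

Placing the control at the data boundary, rather than trusting the model to withhold sensitive values, reflects the point above: the model itself exercises no discretion.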
Enhanced transparency: Building trust among end users of generative AI models is imperative in the public sector. Improving AI literacy among non-experts and funding AI research and development can build public trust in generative AI-based systems in F&A. Finland, for instance, launched a free online course on AI basics to educate non-specialists, and 750,000 people have completed it globally. Such education can help users determine what sensitive financial data they can and cannot disclose to AI models.
The implementation of generative AI in the public sector demands heightened diligence. Vigilantly addressing biases, mitigating security risks, scrutinizing output accuracy, complying with data privacy regulations, and enhancing system transparency are among the most crucial considerations.
As success stories of generative AI continue to emerge in other domains, it becomes evident that generative AI is not just a buzzword but a tangible force driving positive change, and its arrival in F&A is near. Embracing its capabilities responsibly and ethically will result in a more responsive, efficient, and citizen-centric public sector.