Auditing
Certain words carry a negative connotation for many of us, and “audit” or “auditing” may well be among them. In technology, however, auditing is both a requirement and a best practice that helps protect an organization against potential security risks, such as those described earlier in this chapter. A technology audit is a review, like any other audit, to ensure that the controls an organization has put in place exist and produce the expected results, and to uncover gaps in those controls, including the generative AI-specific risks described earlier in this chapter.
Consider the example we briefly described at the end of the previous section: an LLM grounded against HR personnel data records, with managed views controlling what each user can see. This is an obvious place where additional security precautions are needed and where additional scrutiny, in the form of audits and reviews, is mandatory.
You may be wondering, “How?” Any LLM that has been grounded against your data should have safeguards in place against access to data that is sensitive or confidential in nature, such as personnel records. As with a standard database, you restrict access to such records. The same applies to generative AI: authentication and authorization are the control mechanisms, so auditing who has had, or currently has, access is important to ensure that only the appropriate individuals or services hold permission. Why not use a generative AI model to help here? After all, generative AI, as you know, can handle large amounts of data and can help analyze transactional data, such as access records, across many kinds of data services. Moreover, rather than kicking off an audit manually or on an occasional schedule, an LLM could run it on a regular basis, or even continuously in real time. You can imagine how powerful such LLMs can be in helping an organization safeguard against security threats.
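Before handing such a review to an LLM, it helps to see how small the underlying check really is. The sketch below is a minimal illustration, not a production audit tool: the log format, resource names, user identities, and allowlist are all hypothetical, standing in for whatever your identity provider or database audit log actually emits.

```python
# Hypothetical access-log entries; in practice these would come from your
# identity provider, database audit log, or cloud platform activity log.
ACCESS_LOG = [
    {"user": "hr-admin", "resource": "hr/personnel_records", "time": "2024-05-01T09:15:00"},
    {"user": "intern-42", "resource": "hr/personnel_records", "time": "2024-05-01T23:40:00"},
    {"user": "hr-admin", "resource": "hr/benefits", "time": "2024-05-02T10:05:00"},
]

# Assumed allowlist: the only principals permitted to read each sensitive resource.
ALLOWED = {"hr/personnel_records": {"hr-admin", "hr-service"}}

def flag_violations(log, allowed):
    """Return log entries that touch a sensitive resource without permission."""
    return [
        entry
        for entry in log
        if entry["resource"] in allowed and entry["user"] not in allowed[entry["resource"]]
    ]

# Running the check surfaces the one access that falls outside the allowlist.
for violation in flag_violations(ACCESS_LOG, ALLOWED):
    print(f"AUDIT FLAG: {violation['user']} accessed {violation['resource']} at {violation['time']}")
```

An LLM's role in this picture is not to replace the rule itself but to run checks like this continuously, summarize the flagged entries in plain language, and spot access patterns a fixed allowlist would miss.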
Many hyperscale cloud vendors, such as Microsoft Azure, provide both auditing and reporting. We covered Azure Monitor in the previous chapter; it can also audit at the cloud platform level. That is, Azure can record activity against an Azure OpenAI account, such as someone creating a new AOAI account or service. Other tools, such as Application Insights coupled with Microsoft Fabric reporting and Power BI, provide deeper application-layer insights and allow for the auditing of your generative AI applications.
As we have learned, technology audits determine whether corporate assets are protected, or still need to be, and ensure that data integrity persists and remains aligned with the organization’s overall goals. While audits can capture details, breaches, or security gaps, they can only go so far if no one reviews the results or acts on them. This is where the other half of the auditing equation comes into play: the actual reporting of the audit results.