Articles

  • AI governance is lagging behind

The rapid development of artificial intelligence (AI) creates many new opportunities for companies, but there is also a growing gap between the advancement of the technology and organizations’ ability to control it. Recent surveys* show that while many businesses claim to have some form of AI governance process, only a fraction have actually achieved a mature and sustainable governance model. Furthermore, staff are not always risk-aware but have become well versed in using AI tools, and more often than not they outpace the IT department’s ability to manage those tools.

    AI requires the same governance discipline as information security and data protection 

    Given the fast-paced development of AI technologies and tools, and organizations’ willingness to adopt them swiftly, several challenges are apparent: 

    • AI is often implemented faster than organizations can build governance structures. Shadow IT is one contributor to this phenomenon; limited engagement from management can be another. 
    • AI technology evolves at such a high pace that policies will need to adapt continuously. 
    • A lack of data governance makes it difficult to ensure that AI systems under development meet satisfactory quality standards in terms of transparency, traceability, and risk management. 
    • AI ethics, that is, ensuring the fairness, transparency, and accountability of an AI system, is often little more than a message promoted on the company website. 
    • Many companies lack clear roles, responsibilities, and processes for managing AI-related decisions. 
    • Successful AI implementations often require cross-functional collaboration, which is not always well established in every organization. 

    To be clear, these challenges are nothing new under the sun. They reflect the same need for structure, methodology, and continuous improvement that has long existed in the fields of information security and data protection. The difference is that AI currently moves at an accelerated pace that we struggle to keep up with, and with a complexity that challenges even technically skilled staff. 

    We help companies build sustainable governance models 

    Cloudvisor provides services to support companies in establishing and managing processes that create security, clarity and compliance. We offer, among other things: 

    AI governance and policies 

    • AI literacy training for leaders and staff – to comply with the EU’s AI Act.
    • Support in the introduction of AI in operations, processes and products.
    • Setting up an organization-wide AI governance structure, defining roles and responsibilities to manage risks and enable continuous improvement.

    Quality Assurance & Compliance

    • Identifying and implementing quality processes and metrics to ensure an AI system under development can meet the principles of responsible AI, that is, be fair, transparent, and accountable (a minimal illustration follows this list).
    • Data classification, data governance, and PII risk management. 
    • Preparatory work to comply with the AI Act while maintaining compliance with existing legislation such as the GDPR and the Data Act.
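
    To make the notion of a responsible-AI quality metric concrete, below is a minimal sketch of one widely used fairness check, the demographic parity difference between groups. It is an illustration only; the group labels, predictions, and tolerance threshold are assumptions made for the example, not part of any specific framework or engagement.

    # Minimal sketch of a responsible-AI quality metric: demographic parity
    # difference, i.e. the gap in positive-outcome rates between groups.
    # All data below is illustrative; the values and the 0.1 tolerance are
    # assumptions for the example.

    def demographic_parity_difference(groups, predictions):
        """Return the gap between the highest and lowest positive-prediction
        rate across the groups."""
        counts = {}
        for group, pred in zip(groups, predictions):
            positives, total = counts.get(group, (0, 0))
            counts[group] = (positives + pred, total + 1)
        rates = [positives / total for positives, total in counts.values()]
        return max(rates) - min(rates)

    # Hypothetical model outputs for two demographic groups (1 = approved).
    groups = ["A", "A", "A", "B", "B", "B"]
    predictions = [1, 1, 0, 1, 0, 0]

    gap = demographic_parity_difference(groups, predictions)
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.1:  # tolerance agreed per use case, illustrative here
        print("Outside tolerance - review the model and its training data.")

    In practice, a metric like this would be tracked alongside transparency and accountability measures, such as documentation, audit trails, and human oversight, as part of the overall quality process.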

    References

    Referenced articles can be found here (Computer Sweden) and here (Trust at Scale: Why data governance is becoming core infrastructure for AI).

  • How to build resistance against ever-growing phishing attacks

    A recent study of staff at a large hospital in the U.S. revealed an uncomfortable truth: phishing education often fails to change users’ clicking behavior.

    So should we continue to provide such training? Let us reflect on that.

    Malicious actors often use phishing techniques to trick individuals into revealing sensitive information such as passwords or financial details, information that can then be used to further compromise the target. The attack is typically accomplished by getting the user to click a link in an email sent to selected individuals or to a broad audience.

    While we now know that phishing and similar attacks have reached new levels with the introduction of GenAI, it is surprising to see that, even after training, employees continued to fall for phishing attempts – at rates similar to those before the training!

    One can argue that this research is anecdotal, and it may be. The results could also indicate that the issue at hand is not just about knowledge but also about culture and context. Human behavior is often shaped by habits and norms at the workplace. Stress, too, can heavily influence your workday and, in the long run, what links you click.

    To truly reduce risk, organizations must foster a culture of attentiveness, safety, and shared responsibility. Training must be continuous, contextual, and supported by leadership. Security isn’t just a technical challenge — it’s a human one. And until we treat it that way, phishing and similar activities will remain a persistent threat for many organizations.

    Read the full article here: UCSD Health.

  • GenAI and LLM Data Privacy ranking (2025)

    The privacy-focused company Incogni recently published its 2025 ranking of how privacy-safe the various generative AI services are. We found it a very interesting read, with some surprises in it, too.

    Specifically, it is somewhat troubling to read how even the most experienced researchers struggled to find relevant information on how users’ data is processed in the various tools.

    We can conclude that while AI as a technology has taken major leaps over the last few years, AI providers’ ability to give users clear and comprehensive information on how their data is processed has not improved in the same way.

    Read the full article here: Incogni blog.