By Lina Ghazal, Head of Regulatory and Public Affairs
Artificial intelligence (AI) is expanding at a remarkable pace, with roughly nine out of ten leading companies worldwide incorporating some element of the technology into their day-to-day operations. As a result, new use cases for AI are continually surfacing, allowing businesses across sectors to leverage stronger data analytics and reduce manual labor, boosting their efficiency.
Given today’s dynamic business landscape, companies must integrate AI to stay ahead of the competition, but they must do so while adhering to existing and emerging regulations that could affect their operations. While AI legislation is only just getting off the ground, companies need to plan for the future and prepare for legal shifts for which they could be held accountable.
Regulations governing the application of AI will most likely pertain to its development and deployment. However, companies should also zero in on sector-specific scenarios. This will be especially significant for content moderation, the process of removing illegal, irrelevant, or harmful content from digital platforms.
We are already seeing an increase in AI-generated images, many of them age-restricted or even illegal. AI-generated deepfakes have increased tenfold across industries worldwide; in North America alone, they rose by a staggering 1,740% from 2022 to 2023. Well-known figures and law enforcement alike are highlighting the impact of harmful AI-generated content.
Yet it remains uncertain how future laws will address content moderation. The changes are expected to be significant, so businesses must stay vigilant. They will need to implement mechanisms to detect and remove unlawful content, including content produced by AI, and collaborate with like-minded businesses to address the problem.
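To make that concrete, the sketch below shows what a detect-and-escalate step in a moderation pipeline might look like. It is a minimal illustration in Python: `score_ai_generated` is a hypothetical stand-in for whatever detector a platform actually adopts (an in-house model or a vendor API), and the thresholds are illustrative rather than recommended values.

```python
# A minimal sketch of a detect-and-escalate step in a moderation
# pipeline. score_ai_generated is a hypothetical stand-in for a real
# detector (in-house model or vendor API); thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    item_id: str
    ai_score: float  # 0.0-1.0, detector's confidence the content is AI-generated
    action: str      # "allow", "human_review", or "remove"


def score_ai_generated(image_bytes: bytes) -> float:
    """Hypothetical detector: replace with a real model or vendor API call."""
    return 0.0  # placeholder score


def moderate(item_id: str, image_bytes: bytes,
             review_threshold: float = 0.5,
             remove_threshold: float = 0.9) -> ModerationResult:
    score = score_ai_generated(image_bytes)
    if score >= remove_threshold:
        action = "remove"        # high confidence: take down automatically
    elif score >= review_threshold:
        action = "human_review"  # uncertain: escalate to a moderator
    else:
        action = "allow"
    return ModerationResult(item_id, score, action)
```

The two thresholds reflect the design choice most platforms face: fully automated removal only at high confidence, with uncertain cases routed to human moderators rather than removed outright.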
New Updates to AI Regulations
As AI continues to gain momentum, lawmakers worldwide are developing strategies for governing the technology. The European Union’s Artificial Intelligence Act represents a significant stride towards a regulated AI landscape. Approved by the European Parliament on March 13, 2024, it is the first comprehensive AI regulation from a major legislative body, and it is widely expected to set the global standard and be followed by similar laws elsewhere.
The law sorts AI applications into three categories: those creating unacceptable risk, which are banned; high-risk applications, which are tightly regulated; and other uses, which are largely left unregulated. Although generative AI does not fall under the high-risk category in the new legislation, it must still comply with transparency requirements and EU copyright law.
To help EU businesses prepare for its implementation, the regulator has made available a tool that offers insight into regulatory obligations. The tool outlines which businesses and institutions fall under the legislation, based on a definition of AI and the entity’s role.
Because the associated risk depends on how AI is used and what the organization does, companies can determine whether they will be subject to obligations once the newly ratified regulation takes effect. The precise timeline, however, remains uncertain.
For businesses operating in the EU, this legislation warrants careful examination to grasp its implications and scope. Businesses operating outside the EU should likewise keep track of how they are affected: now that the legislation has been ratified, it is expected to shape upcoming laws not only throughout the EU but also in the US and other regions.
Unlike the internet, where regulatory measures are still under discussion years after its introduction, AI legislation is expected to progress at a faster pace. Regulators, the business sector, and society at large have learned lessons from the internet, which was largely left unregulated to spur innovation and business growth.
However, the debate around online safety continues, making it essential for organizations to work with experts and like-minded companies to build a common understanding of the practical solutions at their disposal. A shared standard of practice will likely emerge, shaped by the realities of AI implementation, the insights of third parties, and regulatory requirements.
Staying Ahead of the Curve
As legislation is ratified, business leaders must stay abreast of the essential documents, timelines, and implications of upcoming AI laws. They also need to evaluate how they deploy, or plan to deploy, AI within their organizations to understand their compliance responsibilities. As with any regulation, compliance will likely carry a financial cost that businesses need to factor in.
For high-risk organizations, particularly small and medium-sized enterprises, it is probably more cost-effective to outsource these solutions to specialists than to develop and implement the technology in-house. Acquiring comprehensive solutions from vendors reduces the need for in-house expertise and helps ensure that AI is used in an explainable manner, which is vital under regulatory scrutiny.
For organizations affected by regulatory changes, a thorough understanding is crucial, as is staying informed of relevant regulations and guidelines in every market where they operate. For some business leaders, this task may feel overwhelming, but there are simple measures they can take to stay up to date.
First, business leaders should engage with peers, both within their industry and across the wider ecosystem, to stay informed about new developments. These networks of practitioners, as authorities in their fields, can help address the practical issues of implementing regulatory measures.
Companies operating in regulated domains should also conduct routine audits and risk assessments to understand their AI systems, compliance posture, and risk exposure. Before doing so, businesses should document their policies, procedures, and decision-making processes. This documentation can serve as proof of compliance and offer transparency to regulators and partners. For a more unbiased perspective, these risk assessments can be carried out by third parties with extensive regulatory experience.
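As a rough illustration of the record-keeping this implies, the sketch below appends each automated moderation decision to a simple audit log. The field names and JSON-lines format are assumptions made for the example, not a regulatory schema; a real deployment would follow whatever evidence requirements apply in its market.

```python
# A minimal sketch of an append-only decision log that could support
# audits and risk assessments. Field names are illustrative only.

import json
from datetime import datetime, timezone
from typing import Optional


def log_decision(log_path: str, item_id: str, model_version: str,
                 ai_score: float, action: str,
                 reviewer: Optional[str] = None) -> None:
    """Append one moderation decision as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "item_id": item_id,
        "model_version": model_version,  # which detector produced the score
        "ai_score": ai_score,            # detector confidence at decision time
        "action": action,                # "allow", "human_review", or "remove"
        "reviewer": reviewer,            # set when a human confirmed or overrode
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: record an automated takedown and a human-reviewed allowance.
log_decision("decisions.jsonl", "img-001", "detector-v2", 0.97, "remove")
log_decision("decisions.jsonl", "img-002", "detector-v2", 0.62, "allow",
             reviewer="moderator-17")
```

Recording the model version alongside each decision matters in practice: when a detector is updated, the log still shows which version produced each outcome, which is the kind of traceability audits tend to probe.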
Business leaders also need to educate all staff engaged in AI development and implementation. Robust training will help them understand their obligations around compliance and the ethical use of AI. With that foundation in place, companies can put continuous-improvement strategies into action, proactively addressing upcoming challenges, and AI governance will improve based on feedback, lessons learned, and emerging best practices.
These practices will vary by industry, organization, company size, and function. For content moderation, though, they involve aligning with like-minded businesses to deploy solutions that can identify age-restricted and illegal AI-generated content so that appropriate action can be taken.
Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.