One Article Review

Home - The article:
Source ProofPoint
Identifier 8453202
Publication date 2024-02-21 13:46:06 (viewed: 2024-02-21 14:09:54)
Title Understanding the EU AI Act: Implications for Communications Compliance Officers
Text The European Union's Artificial Intelligence Act (EU AI Act) is set to reshape the landscape of AI regulation in Europe, with profound implications. The European Council and Parliament recently agreed on a deal to harmonize AI rules and will soon bring forward the final text. The Parliament will then pass the EU AI Act into law. After that, the law is expected to become fully effective in 2026.

The EU AI Act is part of the EU's digital strategy. When the act goes into effect, it will be the first legislation of its kind. It is destined to become the "gold standard" for other countries, in the same way that the EU's General Data Protection Regulation (GDPR) became the gold standard for privacy legislation.

Compliance and IT executives will be responsible for the AI models that their firms develop and deploy. They will need to be very clear about the risks these models present, as well as the governance and oversight they will apply to these models in operation.

In this blog post, we'll provide an overview of the EU AI Act and how it may affect your communications practices in the future.

The scope and purpose of the EU AI Act

The EU AI Act establishes a harmonized framework for the development, deployment and oversight of AI systems across the EU. Any AI that is in use in the EU falls under the scope of the act. The phrase "in use in the EU" does not limit the law to models that are physically executed within the EU; the model and the servers it operates on could be located anywhere. What matters is where the human who interacts with the AI is located.

The EU AI Act's primary goal is to ensure that AI used in the EU market is safe and respects the fundamental rights and values of the EU and its citizens. That includes privacy, transparency and ethical considerations.

The legislation will use a "risk-based" approach to regulate AI, which considers a given AI system's ability to cause harm. The higher the risk, the stricter the legislation. For example, certain AI activities, such as profiling, are prohibited. The act also lays out governance expectations, particularly for high-risk or systemic-risk systems. Because all machine learning (ML) is a subset of AI, any ML activity will need to be evaluated from a risk perspective as well.

The EU AI Act also aims to foster AI investment and innovation in the EU by providing unified operational guidance across the EU. There are exemptions for:

- Research and innovation purposes
- Those using AI for non-professional reasons
- Systems whose purpose is linked to national security, military, defense and policing

The EU AI Act places a strong emphasis on ethical AI development. Companies must consider the societal impacts of their AI systems, including potential discrimination and bias. Their compliance officers will need to satisfy regulators (and themselves) that the AI models have been produced and operate within the act's guidelines.

To achieve this, businesses will need to engage with their technology partners and understand the models those partners have produced. They will also need to confirm that they are satisfied with how those models are created and how they operate. What's more, compliance officers should collaborate with data scientists and developers to implement ethical guidelines in AI development projects within their company.
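To make the scope rule above concrete, here is a minimal sketch, not a legal tool: it only illustrates the logic the article describes, namely that scope follows the location of the human who interacts with the AI rather than the servers, and that the exempt categories fall outside the regime. All names here (AISystemProfile, falls_under_eu_ai_act, the Exemption values) are hypothetical illustrations, not terms defined by the act.

```python
# A hypothetical scope check, assuming a simplified reading of the act's
# territorial rule as summarized in the article above.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Exemption(Enum):
    RESEARCH_AND_INNOVATION = auto()
    NON_PROFESSIONAL_USE = auto()
    NATIONAL_SECURITY_DEFENSE_POLICING = auto()

@dataclass
class AISystemProfile:
    name: str
    users_located_in_eu: bool     # where the interacting humans are
    servers_located_in_eu: bool   # irrelevant to scope; kept to make the point
    exemption: Optional[Exemption] = None

def falls_under_eu_ai_act(profile: AISystemProfile) -> bool:
    """Return True if the system falls under the scope of the EU AI Act."""
    if profile.exemption is not None:
        return False              # exempt uses sit outside the regime
    return profile.users_located_in_eu  # scope follows the user, not the server

# Example: a chatbot hosted outside the EU but used by people inside it
# is still in scope.
chatbot = AISystemProfile("support-chatbot",
                          users_located_in_eu=True,
                          servers_located_in_eu=False)
assert falls_under_eu_ai_act(chatbot)
```

Note that the server flag deliberately plays no role in the check; that is the point the article makes about models "physically executed" outside the EU.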
Requirements of the EU AI Act

The EU AI Act categorizes AI systems into four risk levels:

- Unacceptable risk
- High risk
- Limited risk
- Minimal risk

Particular attention must be paid to AI systems that fall into the "high-risk" category. These systems are subject to the most stringent requirements and scrutiny, and some will need to be registered in the EU database for high-risk AI systems as well. Systems that fall into the "unacceptable risk" category will be prohibited.

In the case of general AI and foundation models, the regulations focus on the transparency of models and the data used and avoiding the introduction of system
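The four-tier model lends itself to a simple illustration. The sketch below is hypothetical: the tier names come from the article, but the obligation strings and the example classifications are assumptions for illustration, not the act's actual annexes or use-case lists.

```python
# A minimal sketch, assuming a simplified reading of the four-tier
# risk model described above.
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strictest requirements and scrutiny
    LIMITED = "limited"            # lighter duties, mainly transparency
    MINIMAL = "minimal"            # largely unregulated

def obligations(level: RiskLevel) -> list[str]:
    """Map a risk tier to the headline consequences named in the article."""
    if level is RiskLevel.UNACCEPTABLE:
        return ["prohibited: may not be deployed in the EU"]
    if level is RiskLevel.HIGH:
        return ["most stringent requirements and scrutiny",
                "possible registration in the EU high-risk database"]
    if level is RiskLevel.LIMITED:
        return ["transparency obligations"]
    return ["no specific obligations beyond existing law"]

# Hypothetical example classifications, for illustration only:
examples = {
    "social-scoring system": RiskLevel.UNACCEPTABLE,
    "credit-scoring model": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}
for system, level in examples.items():
    print(f"{system}: {level.value} -> {obligations(level)}")
```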
Sent Yes
Tags Vulnerability Threat Legislation
Stories
Notes ★★


The article does not seem to have been picked up after its publication.


The article does not seem to have been picked up by a previous source.