Progress Update
The Tech Accord signatory companies are sharing the progress they have made against the eight core commitments within the Accord between its signing in February 2024 and September 2024.
Progress by company
Adobe is committed to working together with industry and government to combat the deceptive use of AI in the 2024 elections. As part of our commitment to the Munich Tech Accord, Adobe has taken important steps to counter harmful AI content:
Adobe is a co-founder of the Content Authenticity Initiative (CAI) and co-founder and steering committee member of the standards organization, the Coalition for Content Provenance and Authenticity (C2PA), a Joint Development Foundation project within the Linux Foundation. We are committed to collaborating across CAI and C2PA members to ensure open technical standards for provenance are maintained to the highest standards; used to develop and implement content provenance across the digital ecosystem which is interoperable; and ultimately adopted by international standards organizations as the gold standard for helping to combat misinformation. As part of this work, the C2PA has a working group led by a civil society organization that is dedicated to threats and harms mitigation.
Adobe has also invested in the development of free, open-source technology called Content Credentials, which leverages the C2PA open technical standard and acts like a “nutrition label” for digital content. Anyone can use Content Credentials to show important information about how and when the content was created and edited, including whether AI was used. Adobe applies Content Credentials to images generated with Adobe Firefly, our family of creative generative AI models, to provide transparency around AI use. In addition, Adobe allows creators to apply Content Credentials to their work through other popular Adobe Creative Cloud applications including Adobe Photoshop and Adobe Lightroom.
Adobe is committed to advancing the mission of the CAI and C2PA. Since signing the Tech Accord, we have helped grow the CAI’s membership to more than 3,300 members globally. We also continue to promote and drive widespread adoption of Content Credentials, including releasing an open-source video player that will make Content Credentials visible on videos. As we approach the U.S. presidential election, Adobe recently worked with party conventions to drive awareness and adoption of Content Credentials, helping them add verifiable attribution to official campaign materials that were created during the conventions. And as part of our work to ensure the public understands how to leverage provenance tools such as Content Credentials, Adobe has supported the development of the CAI’s media literacy curriculum aimed at providing students with critical media and visual literacy skills to help them better navigate today’s AI-powered digital ecosystem.
Over the past seven months, we’ve continued to refine and execute on our comprehensive, proactive approach to safeguarding elections around the globe.
In May 2024, we updated our Usage Policy to more clearly prohibit the use of our products to interfere with the electoral process, generate false or misleading information regarding election laws and candidate information, or engage in political lobbying or campaigning. We’ve conducted rigorous Policy Vulnerability Testing before major elections globally, including in India, South Africa, Mexico, France, the United Kingdom, and the European Union, and published a detailed blog post on our testing methodology including a public set of quantitative evaluations in June 2024. Our testing has informed improvements to our safety systems and enables our Trust and Safety team to anticipate, detect, and mitigate harmful misuse of our models in election-related contexts.
We’ve helped spread awareness of safety interventions that can prevent generative AI from being misused by bad actors in global elections. We briefed European Commission staff on our election integrity research and interventions ahead of the June EU parliamentary elections, informed multiple state and federal US policymakers of our work, and have been in touch with US civil society organizations and global policymakers throughout this election cycle. Looking ahead, we’ll continue to deepen our collaboration with industry, civil society, and policymakers to prepare for the US election.
ElevenLabs remains steadfast in our commitment to the resolutions set forth in the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, and to working with industry and government to safeguard the integrity of democratic processes around the globe.
Transparency, Provenance, and Deepfake Detection
Enabling clear identification of AI-generated content is one key aspect of ElevenLabs’ responsible development efforts. To promote transparency, we publicly released an AI Speech Classifier, which allows anyone to upload an audio sample for analysis as to whether the sample was generated with an ElevenLabs tool. In making our classifier publicly available, our goal is to prevent the spread of misinformation by allowing the source of audio content to be more easily assessed. We also are working with AI safety technology companies to improve their tools for identifying AI-generated content, including in election-related deepfakes. For example, we have partnered with Reality Defender, a cybersecurity company specializing in deepfake detection, to leverage our proprietary models and methods to improve the efficacy and robustness of their tools. This will enable Reality Defender’s clients, including governments and international enterprises, to detect and prevent AI-generated threats in real time, safeguarding millions from misinformation and sophisticated fraud. In addition, we believe that downstream AI detection tools, such as metadata, watermarks, and fingerprinting solutions, are essential. To that end, we continue to support the widespread adoption of industry standards for provenance as a member of the Coalition for Content Provenance and Authenticity (C2PA) and the Content Authenticity Initiative.
Additional Safeguards to Prevent the Abuse of AI in Elections
We have implemented various safeguards on our platform that are designed to prevent AI audio from being abused in the context of elections. We are continuously enhancing our abuse prevention, detection, and enforcement efforts, while actively testing new ways to counteract misuse. Under our Terms of Service and Prohibited Use Policy, we prohibit the impersonation of political candidates, and we are continuously expanding our automated tools that prevent the creation of voice clones that mimic major candidates and other political figures. We also prohibit the use of our tools for voter suppression, the disruption of electoral processes (including through the spread of misinformation), and political advertising. We continuously monitor for and remove content that violates this policy through a combination of automated and human review. Further, we are committed to ensuring that there are consequences for bad actors who misuse our products. Our voice cloning tools are only available to users who have verified their accounts with contact information and billing details. If a bad actor misuses our tools, our systems enable us to trace the content they generated back to the originating account. After identifying such accounts, Elevenlabs takes action as is appropriate based on the violation, which could include warnings, removal of voices, a ban on the account and, in appropriate cases, reporting to authorities.
Reporting Violations
We take misuse of our AI voices seriously and have implemented processes for users to report content that raises concerns. For example, we provide a webform through which users can identify their concerns and add any documentation that will help us address the issue. We endeavor to take prompt action when users raise concerns with us, which can and has resulted in permanently banning those who have violated our policies.
Outreach, Policymaking, and Collaboration
We are collaborating with governments, civil society organizations, and academic institutions in the US and UK to ensure the safe development and use of AI, including raising awareness around deepfakes. We are a member of the U.S. National Institute of Standards and Technology’s (NIST) AI Safety Institute Consortium, and have participated in work by the White House, the National Security Council, and the Office of the Director of National Intelligence regarding AI audio. We are also working with Congress on efforts to prevent AI from interfering with the democratic process, including supporting the bipartisan Protect Elections from Deceptive AI Act, led by Senators Amy Klobuchar, Josh Hawley, Chris Coons, and Susan Collins, which would ban the use of AI to generate materially deceptive content falsely depicting federal candidates in political ads to influence federal elections.
2024 is the largest global election year in history and it has challenged GitHub to consider what is at stake for developers and how we can take responsible action as a platform. Although GitHub is not a general purpose social media platform or an AI-powered media generating platform, we are a code collaboration platform where users may research and develop tools to generate or detect synthetic media. In line with our commitments as a signatory of the AI Elections Accord, in April 2024 GitHub updated our Acceptable Use Policies to address the development of synthetic and manipulated media tools for the creation of non-consensual intimate imagery (NCII) and disinformation, seeking to strike a balance between addressing misuses of synthetic media tools while enabling legitimate research on these technologies. Following the implementation of this policy, GitHub has actioned repositories for synthetic media tools that were designed for, encourage, or promote the creation of abusive synthetic media, including disinformation and NCII.
Google and YouTube’s Progress Against the Tech Accord to Combat Deceptive Use of AI in 2024 Elections
This year, more than 50 national elections — including the upcoming U.S. Presidential election — are taking place around the world. Supporting elections is a core element of Google’s responsibility to our users, and we are committed to doing our part to protect the integrity of democratic processes globally. Earlier this year, we were proud to be among the original signatories of the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.
Google has long taken a principled and responsible approach to introducing Generative AI products. The Tech Accord is an extension of our commitment to developing AI technologies responsibly and safely, as well as of our work to promote election integrity around the world.
In line with our commitments in the Tech Accord, we have taken a number of steps across our products to reduce the risks that intentional, undisclosed, and deceptive AI-generated imagery, audio, or video (“Deceptive AI Election Content”) may pose to the integrity of electoral processes.
Addressing Commitments 1-4 (Developing technologies; Assessing models; Detecting distribution; and Addressing Deceptive AI Election Content)
Helping empower users to identify AI-generated content is critical to promoting trust in information, including around elections. Google developed Model Cards to promote transparency and a shared understanding of AI models. In addition, we have addressed harmful content through our AI Prohibited Use Policy, begun to proactively test systems using AI-Assisted Red Teaming, and restricted responses for election-related queries across many Generative AI consumer apps and experiences. We have also deeply invested in developing and implementing state-of-the-art capabilities to help our users identify AI-generated content. With respect to content provenance, we’ve expanded our SynthID watermarking toolkit to more Generative AI tools and to more forms of media, including text, audio, images, and video. We were also the first tech company to require election advertisers to prominently disclose when their ads include realistic synthetic content that’s been digitally altered or generated, and we recently added Google generated disclosures for some YouTube Election Ads formats. YouTube now also requires creators to disclose when they’ve uploaded meaningfully altered or synthetically generated content that seems realistic, and adds a label for disclosed content in the video’s description. For election-related content, YouTube also displays a more prominent label on the video player for added transparency. In addition, Google joined the C2PA coalition and standard as a steering committee member and is actively exploring ways to incorporate Content Credentials into our own products and services including Ads, Search, and YouTube.
Addressing Commitments 5-7 (Fostering cross-industry resilience; Providing transparency to the Public; and Engaging with Civil Society)
Our work also includes actively sharing our learnings and expertise with researchers and others in the industry. These efforts extend to increasing public awareness by, for example, actively publishing and updating our approach to AI, sharing research into provenance solutions, and setting out our approach to content labeling.
Artificial intelligence innovation raises complex questions that no one company can answer alone. We continue to engage and collaborate with a diverse set of partners including the Partnership on AI, ML Commons, and we are a founding member of the Frontier Model Forum, a consortium focused on sharing safety best practices and informing collective efforts to progress safety research. We also support the Global Fact Check Fund, as well as numerous civil society, research, and media literacy efforts to help build resilience and overall understanding around Generative AI. Our websites provide more details regarding our approach toward Generative AI content and the U.S. election, as well as the recent EU Parliamentary elections, general election in India, and recent elections in the U.K. and France.
We look forward to further continuing to engage with stakeholders and doing our part to advance the AI ecosystem.
Enhancing Transparency through ChatEXAONE
In August 2024, LG AI Research launched ChatEXAONE, an Enterprise AI Agent, based on the EXAONE 3.0 model. As a starting point, this service aims to improve the productivity of employees within the LG Group by addressing the accuracy and reliability of the content produced, which is an issue with existing generative AI. ChatEXAONE’s strength lies in providing references and evidence from reliable sources for the generated contents. In the election, AI-driven deception often thrives in the absence of verifiable sources. By consistently citing its sources and presenting evidence, ChatEXAONE not only improves the accuracy of the information but also supports the wider ecosystem in promoting trust and accountability. This critical feature aligns with the core commitments of the Tech Accord, which emphasizes the need for developing and implementing technology to mitigate risks related to Deceptive AI Election content.
Driving AI Literacy and Ethical AI Usage
While technological safeguards are essential, we recognizes that combating deceptive AI content requires more than just tools—it also requires informed users. To address this, LG AI Research has been investing in educational campaigns aimed at raising AI literacy. Through programs like the LG Discovery Lab and AI Aimers, offering practical education to younger generations about the ethical use of AI technologies. Moreover, LG’s collaboration with UNESCO to develop an AI ethics curriculum, set to be launched globally by 2026, underscores its long-term commitment to fostering responsible AI use. This initiative will play a pivotal role in equipping users worldwide with the knowledge to recognize and mitigate the risks associated with AI-driven misinformation, especially during elections. In line with the Tech Accord’s goals, these educational efforts ensure that the public understands both the benefits and the potential dangers of AI.
By prioritizing transparency through tools like ChatEXAONE and driving education around AI literacy, LG is actively contributing to the global effort to ensure AI is used ethically during elections.
Work undertaken since February 2024, illustrating progress made against the Code’s goals and commitments.
To further the Tech Accord’s goals, LinkedIn now labels content containing industry-leading “Content Credentials” technology developed by the Coalition for Content Provenance and Authenticity (“C2PA”), including AI-generated content containing C2PA metadata. Content Credentials on LinkedIn show as a “Cr” icon on images and videos that contain C2PA metadata. By clicking the icon, LinkedIn members can trace the origin of the AI-created media, including the source and history of the content, and whether it was created or edited by AI.

C2PA metadata helps keep digital information reliable, protect against unauthorized use, and create a transparent, secure digital environment for creators, publishers, and members. More information about LinkedIn’s work is in our C2PA Help Center article.
LinkedIn also prohibits the distribution of synthetic or manipulated media, such as doctored images or videos that distort real-life events and are likely to cause harm to the subject, other individuals or groups, or society at large. See Do not share false or misleading content. Specifically, LinkedIn prohibits photorealistic and/or audiorealistic media that depicts a person saying something they did not say or doing something they did not do without clearly disclosing the fake or altered nature of the material.
Election security is a critical issue that McAfee has supported globally for many years. McAfee has also been a leader in researching and protecting customers from the risks that AI technology can pose, through a combination of technology innovation and consumer education.
This August, with the upcoming U.S. election on the horizon, McAfee launched a new product, McAfee Deepfake Detector, aimed at helping consumers detect AI-generated content and other deepfake material – an initiative of extraordinary importance in combating misinformation during this critical time. McAfee Deepfake Detector works directly in users’ browsers on select AI PCs, alerting them when any video they are watching contains AI-generated audio content. McAfee began with audio because its threat research has found that most deepfake videos used AI-manipulated or AI-generated audio as the primary way to mislead viewers. McAfee is committed to expanding availability of this product to more platforms and further developing its technology to identify other forms of deepfake content, such as images. In addition, McAfee is exploring other avenues that sit at the intersection of content distribution and consumption to leverage its deepfake detection technology to help identify deceptive AI election content and other deepfake content.
Beyond its products, McAfee has also long been committed to educating consumers to help them stay safe online and avoid fraud, scams, and misinformation. To that end, McAfee recently launched the Smart AI Hub at mcafee.ai, which has resources and interactive elements to build awareness of deepfakes and AI-driven scams. McAfee’s education efforts also include blogs and social posts about deceptive AI election content and how to recognize it, including articles about specific instances of such content. In addition to its own publishing, McAfee also regularly engages with news outlets to provide expertise and help spread awareness on these issues.
Meta Investments in AI Transparency, Standards, and Public Engagement
Meta works across many channels to align with the goals of the AI Tech Accords, including public reporting, contributions to industry coalitions and standards organizations, releasing of open source tools, and a variety of mechanisms in the user-facing surfaces of our products and services. Our technical research teams continue to push the state of the art forward, open sourcing a new method for more durable watermarking of AI-generated images and audio detection, and releasing an update to our open LLM-powered content moderation tool LlamaGuard. We continue to fine tune our AI models and conduct extensive red teaming exercises with both internal and external experts. We have supported the Partnership on AI’s assembly of a Glossary of Synthetic Media Transparency Methods to improve fluency across industry, academia, and civil society on the range of methods available to support AI transparency. In September 2024 we joined the Steering Committee of the Coalition for Content Provenance and Authenticity (C2PA.org) to help guide the development of the C2PA provenance specification and other efforts to address the problem of deceptive online content through technical approaches to certifying source and history of media content.
Since January of 2024, we have required advertisers to disclose the use o