Until the remaining legal uncertainties regarding SCCs, the DPF and other transfer mechanisms are settled, we will continue to face uncertainty as to whether our customers will be permitted to transfer personal data to the United States for processing by us as part of our platform services. Our customers may view data transfer mechanisms as being too costly, too burdensome, too legally uncertain or otherwise objectionable and therefore decide not to do business with us.
We publicly post documentation regarding our practices concerning the collection, processing, use and disclosure of data. Although we endeavor to comply with our published policies and documentation, we may at times fail to do so or be alleged to have failed to do so. Any failure or perceived failure by us to comply with our privacy policies or any applicable privacy, data protection, information security or consumer-protection laws, regulations, orders or industry standards could expose us to costly litigation, significant awards, fines or judgments, civil and/or criminal penalties or negative publicity, and could materially and adversely affect our business, financial condition and results of operations. Because our published privacy policy and other documentation contain promises and assurances about privacy and security, we may be subject to state and federal enforcement action if those statements are found to be deceptive, unfair or misrepresentative of our actual practices, which could, individually or in the aggregate, materially and adversely affect our business, financial condition and results of operations.
If our privacy or data security measures fail to comply with current or future laws and regulations, we may be subject to claims, legal proceedings or other actions by individuals or governmental authorities based on privacy or data protection regulations and our commitments to customers or others, as well as negative publicity and a potential loss of business. Moreover, if future laws and regulations limit our subscribers’ ability to use and share personal data or our ability to store, process and share personal data, demand for our solutions could decrease, our costs could increase, and our business, results of operations and financial condition could be harmed.
Our use of new and evolving technologies, such as AI, may present risks and challenges that can impact our business, including by posing cybersecurity and other risks to our confidential and/or proprietary information, including personal information, and as a result we may be exposed to reputational harm and liability
We currently use and integrate AI technologies into our business processes and services. Development, use, and deployment of these technologies present risks and challenges that could affect our business. If we enable or use solutions that draw controversy due to perceived or actual negative societal impact or otherwise cause harm, we may experience brand or reputational harm, competitive harm, legal liability, and defensive costs and expenses.
A growing number of legislators and regulators are adopting laws and regulations addressing, and have focused enforcement efforts on, the development and adoption of AI technologies and the use of such technologies in compliance with ethical standards and societal expectations. Several states, including Colorado, California and Texas, have passed laws that will take effect in 2026 regulating various uses of AI, from the making of consequential decisions to requiring companies to disclose the sources of training data for AI technologies, among other requirements. In addition, U.S. states and the federal government have adopted a wide variety of measures to regulate various facets of AI development, deployment and use in recent years, and it is possible that new laws and regulations will be adopted in the near future, or that existing laws and regulations may be interpreted in ways that would affect our business and the ways in which we and our customers use our AI technologies, our financial condition and our results of operations, including as a result of the cost to comply with such laws or regulations. A proposed 10-year federal moratorium on the enforcement of certain state AI laws, which was included in the Trump Administration’s budget reconciliation bill, was excluded from the One Big Beautiful Bill Act (“OBBBA”). This exclusion, in turn, may further stimulate AI regulation at the state level, contributing to a complicated legislative patchwork.
Outside the U.S., lawmaking and regulation relating to AI is proceeding at a similar pace. In Europe, for example, the EU’s Artificial Intelligence Act (“AI Act”) entered into force on August 1, 2024 and, with some exceptions, will begin to apply as of August 2, 2026. This legislation imposes significant obligations on providers and deployers of high-risk AI systems, and encourages providers and deployers of AI systems to account for EU ethical principles in their development and use of these systems. As we continue to develop and use AI systems that are governed by the AI Act or other emerging regulations, we may need to ensure higher standards of data quality, transparency and human oversight, and to adhere to specific ethical, accountability and administrative requirements, some of which may increase our costs and compliance obligations. Further, potential government regulation related to AI use and ethics may also increase the cost of research and development in this area, and failure to properly remediate AI usage or ethics issues may undermine public confidence in AI, which could slow the adoption of AI in our products and services.
The rapid evolution of AI technologies and regulatory frameworks will require the application of significant resources to design, develop, test and maintain such systems to help ensure that AI is implemented in accordance with applicable law and regulation and in a socially responsible manner and to minimize any real or perceived unintended harmful impacts. The use of certain AI technologies can also give rise to intellectual property risks, including by disclosing or otherwise compromising our confidential or proprietary information.