AI and Law: Navigating Emerging Legislation

Doug Gapinski

November 07, 2024

AI is reshaping industries like legal tech, healthcare, and transportation. In the legal industry specifically, generative AI (GenAI) is changing how work gets done through custom AI models, careful attention to data quality, and human-in-the-loop processes. But the legal structures governing AI itself are still emerging.

Existing laws, such as those addressing data privacy and discrimination, cover parts of what AI encompasses but don’t get to the heart of the concerns AI introduces — specifically questions of fair use around training data, bias, accountability, and the rights of those impacted by these systems.

Lawmakers understand that new frameworks are necessary to tackle these complexities. As with all emerging technology, the law must strike a balance between encouraging innovation and ensuring that these technologies are developed and used responsibly.

This article explains why legislation is nascent, identifies areas where precedents are evolving, and provides a few takeaways for companies considering how to develop generative AI responsibly.

 

Why Is Generative AI Regulation Lagging Behind?

The absence of laws governing generative AI may seem surprising: OpenAI introduced GPT-1 back in 2018, yet no comprehensive federal regulation oversees the technology in the U.S. today. What may appear to be a slow response from lawmakers stems from a cautious “wait-and-see” stance common in emerging tech regulation. Policymakers often avoid legislating preemptively, preferring to react once significant adverse effects become clear. This approach avoids passing laws based on hypothetical scenarios, which could prove irrelevant or too restrictive as the technology evolves.

Because AI’s complexity requires specialized understanding, lawmakers can struggle to assess the implications of new policies. Ethical and legal questions surrounding AI, such as issues of bias, copyright, and privacy, are often beyond the expertise of legislators. This knowledge gap makes it difficult to draft informed policies, especially as tech companies guard the details powering their innovations.

In the U.S., AI also falls under the purview of multiple regulatory authorities, such as the FTC, FCC, and Department of Commerce, all of which operate with differing priorities. In a landscape like this, the industry can expect either delays as federal agencies coordinate or multiple, potentially overlapping regulatory frameworks emerging from different organizations.

Beyond institutional barriers, powerful tech corporations including Google, Meta, Amazon, and Nvidia actively lobby to influence legislation, often advocating for a hands-off approach that allows innovation to flourish. Lobbying can slow regulatory efforts as lawmakers weigh economic growth against potential risks. There is also an underlying fear that aggressive regulation could put the U.S. at a competitive disadvantage globally, discouraging investment and innovation.

Although the slow pace of regulation aligns with past patterns of cautious lawmaking for new technology, companies at the forefront of AI must remain vigilant and prepared for policy changes.

 

Copyright and Data Privacy: Legal Challenges in AI

Several areas of law are likely to see new legislation or precedent shaped by AI. Here are a few examples.

Copyright Law and Training Models. Major record labels (Universal, Sony, and Warner Music) are suing AI music platforms such as Suno and Udio for allegedly using copyrighted songs without permission to train their models. The outcome of these cases could set a precedent for what counts as fair use in AI-generated music specifically, or for training data at large.

Data Rights and Privacy. AI often collides with data privacy, as seen in high-profile cases like Clearview AI, which amassed a vast facial recognition database by scraping publicly available images without individuals' consent. Although Clearview has settled its lawsuit, the case illustrates how current laws are being tested by AI’s capabilities. Expect future legislation to set stricter boundaries on data collection and use, in some cases requiring explicit consent from consumers, and to strengthen privacy protections against invasive AI applications.

Autonomous Systems Liability and Accountability. AI systems can make mistakes that affect public safety or health, which increases the likelihood of legislation. Tesla has faced lawsuits related to accidents involving its autonomous driving systems, while healthcare AI solutions, such as IBM’s Watson, have suffered backlash and reputational harm over alleged misdiagnoses. These incidents highlight how difficult it currently is to assign responsibility among developers, manufacturers, and users of AI-driven systems. As autonomous systems become more integrated into daily life, the legal system will likely need to provide clearer guidelines on liability.

The Role of Bias and Fairness in AI-driven Decisions. Your AI is only as good as your data. The ethical and fairness implications of AI-driven decisions are already generating scrutiny, especially in areas like employment. A complaint involving HireVue, an AI-based hiring platform, alleged that the company’s algorithms were biased against certain groups. Although HireVue addressed the complaint by modifying its product, the case underscores the potential for bias in algorithmic decision-making. It’s also worth noting that regulation in this area could apply to the data itself rather than to LLM technology: if an LLM’s embeddings are built from biased data, the problem lies in the data, not in the embeddings. A simple audit of the underlying data, as sketched below, can surface this kind of bias early.
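To make that concrete, here is a minimal, hypothetical Python sketch of the kind of data audit a team might run on hiring outcomes before (or alongside) any model work: compute selection rates per group and flag disparities using the widely cited four-fifths heuristic. The column names, sample data, and 0.8 threshold are illustrative assumptions, not a standard or legally definitive test.

    import pandas as pd

    # Hypothetical applicant outcomes; the column names and values are illustrative.
    applicants = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "selected": [1,   1,   0,   1,   0,   1,   0,   0],
    })

    # Selection rate per group: the share of applicants marked as selected.
    rates = applicants.groupby("group")["selected"].mean()

    # Disparate-impact ratio: each group's rate relative to the best-off group.
    ratios = rates / rates.max()

    # The four-fifths rule is a screening heuristic, not a legal conclusion:
    # a ratio below 0.8 suggests the data or process deserves closer review.
    for group, ratio in ratios.items():
        status = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: rate {rates[group]:.2f}, ratio {ratio:.2f} -> {status}")

A check like this applies to the training data regardless of which model or vendor sits on top of it, which is where scrutiny in cases like HireVue’s tends to land.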

 

What Can You Do to Prepare for Changing AI Legislation?

  1. Stay in the loop. AI laws are evolving rapidly worldwide, and staying updated is essential to avoid sudden disruptions. Actively monitor regulatory developments, especially in key regions like the EU and U.S., where policymakers are setting global standards. Engage with relevant industry groups and frameworks (the Partnership on AI, the OECD AI Principles, or IEEE’s Ethically Aligned Design), as they provide early insight into potential regulatory shifts. By participating in these discussions, your company can anticipate changes and adapt proactively.
  2. Follow best practices. Don’t wait for AI regulations to enforce ethical standards. Adopt frameworks such as those from the OECD or IEEE now, as they’re becoming the blueprint for responsible AI. Implementing these frameworks early shows a commitment to ethical AI, which can strengthen your brand’s reputation. Regularly revisit and update your ethical guidelines to stay aligned with the latest industry standards, and consider incorporating frameworks that address fairness, accountability, and transparency.
  3. Get compliant. Although broad AI regulations are still emerging, existing data privacy laws like GDPR and CCPA, along with industry-specific standards, set a solid compliance foundation. Collect only essential data, secure it rigorously, and be transparent about how it’s handled. Build systems that pseudonymize or anonymize sensitive information (see the sketch after this list). Regular internal audits can help identify vulnerabilities, saving you from costly privacy breaches, fines, and reputational damage.
  4. Be explainable. Users, clients, and regulators need to understand how your AI reaches decisions. Invest in systems, or implementations of AI, that provide clear, interpretable explanations of their outcomes. For important outputs or critical use cases, document how the system works, the data it relies on, and any steps taken to mitigate bias. Routine auditing will help identify any “black box” elements that need clarification.
  5. Set up a responsible AI working group. Assemble a holistic team that includes not only engineers and product roles but also experts in ethics, legal, and domain-specific fields. This group should regularly review AI practices and data concerns, focusing on potential biases, ethical risks, and compliance. A diverse working group brings varied perspectives, helping to anticipate and address issues that might otherwise go unnoticed.
  6. Bring in experts. The higher your risk profile, the more essential it is to engage legal and compliance professionals who specialize in AI to review your systems. Third-party auditors can also assess your algorithms for fairness, transparency, and bias, helping your team stay objective. Recurring external audits demonstrate a proactive approach to regulatory alignment, which benefits both compliance and public trust.
  7. Tailor to your industry. AI regulations are unlikely to be one-size-fits-all, and industry-specific laws are already in place for sectors like healthcare and finance. In healthcare, prioritize patient data privacy and compliance with HIPAA; in finance, ensure algorithms align with fair lending and Dodd-Frank standards.
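As a small illustration of point 3, here is a hedged sketch of how a data pipeline might drop non-essential fields and pseudonymize direct identifiers before records are stored or used for training. The field names and salt handling are illustrative assumptions; pseudonymization is not full anonymization, and what counts as compliant depends on the laws that apply to you.

    import hashlib
    import os

    # Fields the downstream system actually needs; everything else is dropped
    # (data minimization). The field list is an illustrative assumption.
    KEEP_FIELDS = {"user_id", "country", "signup_year"}
    IDENTIFIER_FIELDS = {"user_id"}

    # In production the salt should come from a secret store, not a hard-coded default.
    SALT = os.environ.get("PSEUDONYM_SALT", "example-salt")

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a salted, one-way hash."""
        return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

    def minimize_record(record: dict) -> dict:
        """Keep only essential fields and pseudonymize direct identifiers."""
        cleaned = {k: v for k, v in record.items() if k in KEEP_FIELDS}
        for field in IDENTIFIER_FIELDS & cleaned.keys():
            cleaned[field] = pseudonymize(str(cleaned[field]))
        return cleaned

    raw = {"user_id": "12345", "email": "a@example.com", "country": "US", "signup_year": 2023}
    print(minimize_record(raw))  # the email is dropped; user_id becomes a salted hash

Keeping this kind of minimization step at the edge of your pipeline means later systems, including any models you train, never see raw identifiers in the first place.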

 

In short: Stay informed, build ethical AI practices into your business, and be ready to adapt when the legal landscape changes.

Looking to navigate AI’s complex landscape? Our expertise in legal tech and AI solutions can help you stay compliant and innovative. Contact us today to explore how we can support your goals.

Doug Gapinski

Account Director

With over a decade of experience as a team lead and project manager, Doug Gapinski is a Seattle-based Account Director managing long-term product builds with volatile scope. He advocates for quality, transparency, and a shared understanding of project constraints while applying agile methodologies across decentralized teams.