Legal Decisions are Being Codified and the Models are Perpetuating Historical Biases | Episode 1.14
𝗣𝗮𝘁𝗿𝗶𝗰𝗸 𝗞. 𝗟𝗶𝗻 is a lawyer and researcher focused on AI, privacy, and technology regulation. He is the author of 𝘔𝘢𝘤𝘩𝘪𝘯𝘦 𝘚𝘦𝘦, 𝘔𝘢𝘤𝘩𝘪𝘯𝘦 𝘋𝘰, a book that explores the ways public institutions use technology to surveil, police, and make decisions about the public, as well as the historical biases that shape that technology.

Patrick has extensive experience in litigation and policy, having worked for the ACLU, FTC, EFF, and other organizations that advocate for digital rights and social justice. He is passionate about addressing the ethical and legal challenges posed by emerging technologies, especially in the areas of surveillance, algorithmic bias, and data privacy. He has also published articles and papers on facial recognition, data protection, and copyright law.

This podcast episode covers some of the many crazy topics Lin dives into throughout his book, including the following discussions:

- Robert Moses often quoted the saying, “Legislation can always be changed. It’s very hard to tear down a bridge once it’s up.” Unsurprisingly, then, Moses had enormous influence in shaping the physical layout and infrastructure of New York City and its surrounding suburbs (e.g., hundreds of miles of roads, the Central Park Zoo, the United Nations (UN) Headquarters, Lincoln Center, and more). Today, the digital landscape is similarly being built on a foundation of bias.

- Can history be biased? How do we codify bias and build legal models that perpetuate discrimination in policy?

- It is important to understand what a model outputs and which inputs feed into the overall assessment. Algorithms like COMPAS, which is used in the criminal justice system, consider variables such as education, which is indirectly classist, since education is a proxy for wealth. (120)

- The government disproportionately uses surveillance technology to target immigrant communities, and new systems and technologies are usually tested on immigrants first. This is yet another example of how those most affected are those who are already most marginalized.

- Bias is present throughout all stages of policing: from criminal trials (where judges use biased algorithms to validate their already biased perspectives, i.e., confirmation bias), to recidivism assessment (e.g., models like the aforementioned COMPAS), to cash bail, and beyond.

- Generative AI uses nonconsensual pornography in its training data. How can we mitigate such breaches of privacy?

- Intellectual property and copyright law play an interesting role here, working in the best interest of the AI industry, which is incentivized to keep the space unregulated.

- Overrepresentation in a model’s training data is an indicator of discriminatory purposes. What can we do to hedge against such bias in an algorithm’s early phases?

#AlgorithmicBias #PredatoryTech #TechnicallyBiasedPodcast #Gakovii