By Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project
 

When the police keep records of the times and places where our cars have been spotted by an automatic license plate reader, does that implicate our privacy? That question lay behind an important victory that the ACLU won on Thursday in the Virginia Supreme Court.

It might seem obvious that license plate data threatens our privacy. Automatic license plate readers (ALPRs) have the ability to capture location data that can reveal details about Americans’ religious, sexual, political, medical, and associational activities. Yet we have seen repeated efforts around the country to claim that license plate information does not constitute “personally identifiable information” and so is not worthy of the kind of privacy protection that such information deserves.

The Virginia case involved a challenge to the collection of ALPR data under a Virginia privacy law called the “Government Data Collection and Dissemination Act,” which says that personal information “shall not be collected” by state agencies “unless the need for it has been clearly established in advance.” The Virginia court ruled that police license plate data collection is not exempt from the requirements of this law.

In 2013, the (conservative) former Virginia attorney general Ken Cuccinelli issued a strong official advisory opinion stating that under the law, “active” use of an ALPR to identify vehicles that were already involved in wrongdoing was permissible, but that “passive” use of the devices to sweep up raw location data about Virginia residents was not permissible. Nevertheless, in 2014 Harrison Neal, a Virginia resident, found that the Fairfax County Police Department was doing just that, and asked the ACLU of Virginia to file suit challenging the practice on his behalf.

One of the arguments that the Fairfax County police made in its defense was that a “license plate number is not personal information.” The department argued that a license plate tag is associated with a vehicle, not with a person, and a vehicle can be driven by multiple people. A plate number, it argued, “says absolutely nothing about an individual, his personal characteristics… or his membership in an organization.” It argued that its ALPR equipment does not “photograph or otherwise identify the owner or driver of the vehicle,” and that its ALPR database can only be searched by plate number.

But obviously a license plate number should be considered personally identifying information. While in some ridiculously literal sense it may not describe a person, it functions as an effective “index” that allows law enforcement to learn facts (and potentially very intimate ones) about a person with the click of a mouse. There are 268 million registered vehicles in the United States, but only 221 million licensed drivers, which strongly supports the common sense observation that most vehicles, most of the time, are driven by the same person, or at most a handful of family members. Even if there is some fuzziness at the margins about who exactly may have been driving a car in a particular instance, there is a strong assumption that it was the vehicle’s registered owner. Most people do not loan out their cars often, if ever. Even when it comes to corporate vehicle fleets, a plate number and time of day combined with a fleet owner’s records is probably enough to identify a driver.

Nor does the police department’s argument that the ALPR database itself does not contain names or other personally identifiable information hold any weight. The department pretends that databases exist in isolation, rather than being distributed across increasingly cross-referenced and interlinked sets of data. In fact, the Virginia Supreme Court could not find information in the record about how the police can link plate numbers with vehicle owners, so in its ruling it sent the case back to the lower court for fact-finding on that question. But is there any doubt that if the police want to identify the owner of a vehicle captured by an ALPR device, they can easily do so?

Virginia’s data act defines “personal information” as including “all information that… affords a basis for inferring personal characteristics.” It’s becoming easier and easier to connect separate sets of data about people, and data scientists are getting increasingly good at inferring things through fancy analytic techniques. So it’s good that Virginia’s privacy law includes that language, because hidden inferences will likely become an ever-growing threat to privacy.  

But no sophisticated analytics are necessary to cross-reference “vehicle location data” and “registered vehicle owner” datasets, or to understand that license plate location data is “personally identifiable” and a threat to privacy. The fact that the Fairfax County police made these arguments is a reminder that law enforcement and security agencies will push the interpretive flexibility of language past its limits when they want to preserve their power — something we've also seen with the National Security Agency. Drafters of privacy-protecting rules everywhere, take notice.
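The cross-reference described above is, mechanically, nothing more than a lookup keyed on the plate number. A minimal sketch makes the point; all plates, names, times, and locations here are invented for illustration, and real ALPR and DMV systems obviously differ in scale and detail:

```python
# Hypothetical ALPR sightings: each record is a plate, a time, a place.
alpr_sightings = [
    {"plate": "XYZ-1234", "time": "2018-04-01 08:02", "location": "Rte 7 & Main St"},
    {"plate": "XYZ-1234", "time": "2018-04-01 17:45", "location": "Elm St clinic"},
    {"plate": "ABC-9876", "time": "2018-04-01 09:10", "location": "Courthouse lot"},
]

# Hypothetical vehicle-registration table, keyed by plate number.
registrations = {
    "XYZ-1234": "Jane Doe",
    "ABC-9876": "John Roe",
}

# The "cross-reference": one dictionary lookup per sighting turns a
# plate-indexed location trail into a person-indexed location trail.
movements = [{"owner": registrations[s["plate"]], **s} for s in alpr_sightings]

for m in movements:
    print(m["owner"], m["time"], m["location"])
```

The point of the sketch is how little work the join requires: the plate number functions exactly as the “index” the article describes, and no analytics beyond a key lookup are needed to attach a name to a location history.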

Date

Thursday, April 26, 2018 - 4:15pm

By Jacob J. Hutt, William J. Brennan Fellow, ACLU Speech, Privacy, and Technology Project
 

Social media companies are under tremendous pressure to police their platforms. National security officials press for takedowns of “terrorist content,” parents call for removal of “startling videos” masquerading as content for kids, and users lobby for more aggressive approaches to hateful or abusive content.

So it’s not surprising that YouTube’s first-ever Community Guidelines Enforcement Report, released this week, boasts that 8,284,039 videos were removed in the last quarter of 2017, thanks to a “combination of people and technology” that flag content that violates YouTube policies.

But the report raises more questions about YouTube’s removal policies than it answers, particularly with regard to the use of machine-learning algorithms that flag and remove content because they detect, for example, “pornography, incitement to violence, harassment, or hate speech.”

Content flagging and removal policies are increasingly consequential. Because so much speech has migrated onto major social platforms, the decisions those platforms make about limiting content have huge implications for freedom of expression worldwide. The platforms, as private companies, are not constrained by the First Amendment, but they have a unique and growing role in upholding free speech as a value as well as a right.

YouTube’s new report, while an important step toward greater transparency, doesn’t resolve those concerns. First, while it asserts that a human reviews content flagged by artificial intelligence, it neither describes the standards for this review process nor reveals how frequently human reviewers reject the machine’s initial flag. This is especially concerning for content flagged as “violent extremist content.” In the last quarter of 2017, a staggering 98 percent of content removed for reflecting violent extremism was flagged by machine, which raises the concern that YouTube may be relying almost exclusively on automated tools to flag content in the first instance. Does YouTube have a robust system in place for determining when algorithmically identified “violent extremist content” actually features violence or incitement to violence? Or does “human review” mean rubber stamping what the machines have labeled terrorist propaganda?

Deciding what constitutes “extremism” is notoriously fraught — under the best of circumstances, it is subjective, political, and context-dependent. The obvious danger is that efforts to police “extremist” content will be arbitrary, will discriminate against minorities or those expressing unpopular views, or will sweep in reporting or commentary that is critical to public discourse. Apart from the difficulty of defining such a complex category, can an algorithm distinguish violent extremist content from commentary criticizing it? These concerns underscore why platform transparency is so important. A more robust accounting of YouTube’s practices would tell the public how frequently machine-flagged videos end up removed for each type of prohibited content. It would also disclose YouTube’s standards for defining categories like “violent extremist content.” Facebook has recently taken the step of disclosing the rules it applies in removing content, and YouTube should do the same.

YouTube’s transparency report raises other questions about the role of machine learning in content takedowns. In what circumstances do machines automatically remove content without any human review? Though the report emphasizes human review of flagged content, YouTube’s explainer video, “The Life of a Flag,” suggests otherwise:

We’ve developed powerful machine learning that detects content that may violate our policies and sends it for human review. In some cases, that same machine learning automatically takes an action, like removing spam videos.

Under what circumstances does YouTube’s machine-learning algorithm automatically remove videos flagged as potentially inappropriate? And how many videos have been removed without a human ever having reviewed them? We know that YouTube (via Google) partners with the Internet Watch Foundation, which identifies known child pornography images and gives them distinct “digital fingerprints,” or hashes. Social media companies then use the hashes to prevent the images from being posted. YouTube and others are adapting that approach to preempt the posting or sharing of violent extremist content. Setting aside the numerous questions about how content is deemed extremist and is selected for the hash-sharing effort, might YouTube be using other methods to automatically remove non-hashed content that has successfully been uploaded? The explainer video does not explain.
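The hash-sharing mechanism described above can be sketched in a few lines. This is a simplified illustration, not YouTube’s implementation: production systems use robust perceptual hashes (such as PhotoDNA) that tolerate re-encoding and cropping, whereas the cryptographic SHA-256 used here matches only byte-identical files. The file contents are invented for illustration.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute a 'digital fingerprint' (here, a SHA-256 hash) of a file."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database of fingerprints of known prohibited files,
# analogous to the hash lists the Internet Watch Foundation distributes.
known_prohibited = {fingerprint(b"previously identified file")}

def should_block(upload: bytes) -> bool:
    """Block an upload if its fingerprint matches a known prohibited hash."""
    return fingerprint(upload) in known_prohibited

print(should_block(b"previously identified file"))  # exact re-upload: blocked
print(should_block(b"a brand-new video file"))      # no hash match: allowed
```

Note what the sketch shows: hash matching can only catch re-uploads of content someone has already identified. Any automated removal of new, never-before-seen content necessarily relies on other methods, which is precisely the gap the explainer video leaves unexplained.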

Lastly, the report does not grapple with a critical question underlying the platforms’ broader shift to machine learning. If machines are learning from human decisions, how are the companies ensuring that the machines do not reproduce, or even exacerbate, human biases? Whether in the context of predictive policing or the distribution of Medicaid benefits, we’ve consistently cautioned against relying too eagerly on machine learning, which may simply aggregate our biases and mechanize them. That risk seems particularly acute in the context of “violent extremism,” where human biases run deep. How is YouTube ensuring that its potent technology is not engaging in the same racial or religious profiling it may have learned from human reviewers?


There are no easy solutions. Companies like YouTube face government and public pressure to shut down content or be shut down themselves. Some companies are trying to develop nuanced ways to address the issue. Facebook, for instance, announced this week that it would implement an appeals process for removed content and released its internal guidelines for making content determinations. These are important changes, even if they don’t go far enough.

YouTube should clarify exactly how its takedown mechanisms work. Otherwise, we have no way to ensure the machines aren’t going too far.

Date

Thursday, April 26, 2018 - 4:00pm

By Bill Cobb, Deputy Director, ACLU Campaign for Smart Justice
 

“Cobb C-L-3-6-9-1, let’s go. Get your shit. You're outta here.”

While I am sure that this was not the last thing said to me before I walked past the gun tower and through the barbed wire fence at the Pennsylvania State Correctional Institution at Camp Hill on January 12, 2000, it is certainly the last thing I clearly remember hearing upon my departure from prison.

A friend of mine was waiting for me in the prison’s parking lot. I got in the car, and we drove silently for several hours towards Philadelphia where I was mandated to report to a halfway house, Community Corrections Center, for a minimum of 90 days.

I was out of prison, but the laundry list of tasks to complete prior to being eligible for release from the halfway house gave me a whole new set of items to worry about. I needed to find employment, complete a drug assessment program, and find suitable housing, which had to be preapproved by a parole agent I had yet to be assigned.

After serving 6 ½ years in Pennsylvania state prisons, I was now a “returning citizen.”

Today, I am more than 18 years removed from prison. But I am still a “returning” citizen, like millions upon millions of others. It’s estimated that more than 9 million people return home from jails each year in America. Additionally, more than 600,000 people are reportedly released from federal and state prisons annually.

 

[Embedded video from youtube.com: https://www.youtube.com/embed/vnKv5xBQBLU]


Today I am gainfully employed, living in stable housing, providing for my family, serving communities across the nation through civic and political engagement, and paying my fair share in taxes. Nevertheless, I am still subject to punishments, exclusions, and restrictions that impair my ability to pursue the American dream unencumbered. I may be thriving at life, but the threat of losing it all is ever present — and uniquely stressful.

My greatest fear is the loss of employment and income, which would force me to reenter the job market where I would once again have to contend with the pervasive culture of discrimination aimed at people living with arrest and convictions. After being tasked with studying the issue by Congress, the American Bar Association in 2015 identified over 45,000 federal and state statutes and regulations that impose “collateral consequences” on people convicted of crimes. While I was overwhelmed by the sheer number of such policies in place, I was all too familiar with their devastating impact on the lives of people who have paid their debts to society.

I had zero plans of becoming an advocate, activist, or organizer upon my release in 2000, but fate had other plans. I had the fortune of being hired and then rapidly promoted to an executive leadership position at a telemarketing company in Philadelphia. Over the years, I made the company’s hiring policies more inclusive and less discriminatory, which resulted in an explosion of hiring and growth. By centering formerly incarcerated people in leadership positions, our company became an invaluable resource to people living with arrest and convictions, as an estimated 40 percent of our nearly 700 employees had criminal records.

In 2004, the company shut down, and I found myself in search of employment, which I found easily enough. However, after just 30 or so days at my new job, I was terminated due to my criminal history, although I’d fully disclosed the past events in great detail.

Over the course of the next decade, I was hired and fired countless times. Additionally, I was denied life insurance, a car salesman license, a mortgage, and enrollment in a local university all due to my criminal record. My family suffered immeasurably, as we survived on the brink of homelessness numerous times. We depended upon the Supplemental Nutrition Assistance Program and a variety of other social programs in spite of my tireless efforts.

My experience was not an anomaly. As many as three out of four people remain unemployed a year after being released from prison, and only 12.5 percent of employers say they will accept job applications from a formerly incarcerated person. A prison sentence is not the only “debt” one has to repay society. For a lot of people, it’s a debt that can’t ever be repaid, a permanent status that we live with forever.

When I started to question why this was happening, I learned quickly that my hardships were rooted in tough-on-crime policies and a pervasive culture of discrimination. My increasing education and first-hand experiences led me to found a nonprofit organization that aimed to eliminate systemic discrimination practices targeted at people living with arrest and convictions. I focused my advocacy efforts on two separate areas: employment discrimination against those with criminal records and political organizing of the formerly incarcerated.

In Pennsylvania, individuals are eligible to vote immediately upon release. Over the years, we led voter registration and voter education drives that put issues of mass incarceration front and center during election seasons, forcing politicians to answer for the policies that discriminate against the formerly incarcerated.

Not all states, however, allow people to exercise their right to vote after their release from prison. Our inability to give people second chances extends even to basic participation in our democracy. The restrictions can be dizzying. Earlier this month, a Texas woman was sentenced to five years in prison because she voted without realizing that she was not allowed to because of her criminal record.

In the 18 years since I was released from prison, the trauma of being a permanent “returning citizen” has never fully subsided. The only thing I can do is continue to work to support the movement that aims to alleviate and reverse the trauma that millions of people across the country are burdened with because of a system that does not live up to its rhetoric about second chances. Reentry is a lifetime process, but it shouldn’t be.

Date

Wednesday, April 25, 2018 - 6:00pm
