It’s Time To Regulate ‘Smart City’ Technology, Too

This isn’t just about Facebook: When Google is building cities and cars are turning into data-harvesting machines, the need for laws that protect users has never been more urgent.

Mannequins pose with a self-driving car at a vehicle test track in Canada. Mark Blinch/Reuters

There’s a reason one technology reporter compared the Senate’s wide-ranging questioning of Facebook CEO Mark Zuckerberg on Tuesday to “a five-hour tech support call.” The hearing revealed lawmakers’ basic lack of understanding of Facebook’s data-gathering business model and consumer-facing functions.

On Wednesday, day two of Zuckerberg’s congressional testimony in the wake of the Cambridge Analytica data scandal, House members seemed to have done a little more homework. But even lawmakers who started off sharp wound up leaning on Zuckerberg for advice on how to regulate his own company. Their tougher questions didn’t add up to a clear picture of what’s gone wrong at the social media giant.

This was not so surprising. In the U.S., the federal government has long shied away from regulating innovative new businesses and their digital products. Unlike in Europe, where consumer privacy protections in emerging technologies are often mandated by government, U.S. lawmakers tend to be more interested in protecting tech companies as an enormous source of economic activity. But as big data underpins more of society’s architecture—and as our every utterance and movement leaves data trails—the risks of this laissez-faire approach are growing. They creep beyond your web browser and into the physical streets.

Data is being gathered in virtually every mode of transportation. That means data breaches and misuse happen there too—in transit systems, airlines, ride-hailing services, and even walking, biking, and jogging. The risks are perhaps especially great with “connected vehicle” technology. Just as Facebook and Google have panoptical views of user search histories and consumer habits, vehicles linked into the internet of things gather data on location and driving habits, at both the personal and aggregated levels. Some of that data may be used for safety and crime-prevention purposes; it will also have value to advertisers as cars become roving info-tainment pods. The vulnerabilities to hackers and data abusers will almost certainly grow alongside that in-vehicle ad market.

“Widespread concerns have been raised about the lack of security controls in many IoT devices,” stated the U.S. Government Accountability Office in a May 2017 report on the Internet of Things. “[That] is in part because many vehicles, equipment, and other increasingly IoT-enabled devices were built without anticipating threats associated with Internet connectivity or the requisite security controls.”

Unforeseen threats and abuses? That sounds familiar.

This goes beyond privacy concerns related to travelers’ location and driving data. By gaining access to a car’s mechanical hardware, hackers could conceivably stop multiple vehicles in tandem, hold passengers for ransom, and manipulate vehicles to cause fatalities. Even though the connected-vehicle technology on the market now is crude compared to what the future supposedly holds, virtually every major car company has experienced a vehicle hack in some form, Jeep being a particularly high-profile example. In 2015, researchers remotely unlocked a Jeep Cherokee, tampered with its steering, and braked it while it was in motion on a highway. The demonstration showed how cybersecurity intersects with bodily security at a very basic consumer level, and it led to mass recalls.


Some experts argue that the complex software of self-driving vehicles will make them better protected against hacks than current cars. And automakers are now investing in ramped-up cybersecurity protections. But if any industry has shown the limits of self-regulation, it’s the auto industry. Look at the history of basic vehicle safety and environmental laws. Today’s cars kill people at about half the rate they did in the 1970s, largely because federal safety laws forced U.S. automakers (usually against their will) to re-engineer their vehicles to pass increasingly stringent crash tests and to install three-point seatbelts, airbags, anti-lock brakes, and other safety equipment. The recent Volkswagen emissions-deception scandal reminded us how weak large automakers’ incentives to comply with environmental regulations can be, and revealed the cozy relationship many of them have with the federal government.

Today, automakers and tech companies are testing driverless vehicles on public roads with virtually no federal safety laws in place and, in many areas, few state laws either. While Congress has proposed regulations governing self-driving and IoT tech, none have become law (though the House has made progress).

“There’s a fear of being seen as anti-innovation,” said Greg Rodriguez, a Washington, D.C.-based lawyer who works on emerging transportation technologies. Only in California are autonomous vehicle developers mandated to report safety data; at the National Highway Traffic Safety Administration, it’s voluntary.

That is part of the reason why many in the AV industry said they predicted the fatal self-driving Uber crash in regulation-free Arizona last month. That incident, in which the vehicle struck and killed a pedestrian, and a fatal Tesla crash shortly afterward, in which a car on Autopilot struck a freeway divider, both “indicate that more government regulation is needed in this space,” said Missy Cummings, a mechanical engineering professor at Duke University and the director of Duke’s Humans and Autonomy Laboratory. (In the latter incident, Tesla has placed blame on the driver.)

Right now, there isn’t sufficient test data available from developers to evaluate what safety requirements should be, Cummings said. And even if it’s too soon to impose strict safety laws on the developing technology, said Bryan Reimer, a research scientist and associate director of the New England University Transportation Center at MIT, the industry, regulators, and academia need, at a minimum, to come together to talk about a framework for best practices. But those conversations aren’t happening either, he said.

Clearly, there are differences between safety regulations that protect life and limb and data-privacy issues that imperil only your credit card number and browser history. But increasingly these issues overlap. And on both fronts, companies have historically proven incapable of regulating themselves without laws that threaten their bottom line.


The question of regulating connected and autonomous cars is above all about safety: We want to keep these machines from driving into walls and killing people. Regulating Facebook is about a different kind of safety: We want to protect democratic institutions in the face of a powerful data-gathering platform. Where these questions converge is in the freedom both the automotive and information-technology industries have had from a Congress that is averse to regulating—or even understanding—new technologies of all kinds.

No wonder so many academic researchers are clamoring for a larger role in pooling and studying Facebook data. The same call is being sounded by researchers and policy experts in the world of autonomous vehicles.

That isn’t to let Congress off the hook. Politicians don’t need to be subject-matter experts in the sectors they regulate. When Wall Street executives took the hot seat after the 2008 financial crisis, federal lawmakers were not all fully versed in the nuances of mortgage-backed securities, writes Kevin Roose in The New York Times. Nor are they experts in pharmaceuticals, or aviation, or auto emissions. “And yet, Congress—with the help of staff experts and outside advisers—has managed to pass sweeping legislation to prevent excesses and bad behavior in those sectors,” Roose writes. The first step might be to figure out what, exactly, Facebook is doing wrong.

The same applies to connected vehicles and the other blinking, beeping, listening devices that spangle 21st-century life—devices that depend, at least in part, on a business model that commercializes personal data. It will also apply to top-down “smart cities” projects, such as Alphabet’s Sidewalk Labs development in Toronto, which proposes to build an entire neighborhood “from the internet up.”

“This is not some random activity from our perspective,” Alphabet chairman Eric Schmidt said of the project when it was announced in 2017. “This is the culmination, from our side, of almost 10 years of thinking about how technology can improve people’s lives.” It could be very difficult to opt out.

Of course, opting in is the ticket to 21st-century life. The benefits of the technologies that more rapidly connect us—socially and physically, in our homes and across oceans—are enormous. They’re time-saving and equalizing, and they can potentially save lives. But in a time when Alphabet is building cities, cars are turning into movie theaters, and Facebook has more “citizens” than any government jurisdiction in the world, the purposes and implications of technology are broader and blurrier than ever. So is the task of making the laws that can keep it safe and working for everybody. And it’s probably more important than ever.

 

This feature was written by Laura Bliss and originally appeared in CityLab.

 
