Google CEO eyes major opportunity in healthcare, says will protect privacy

Joanna Estrada
January 24, 2020

The artificial intelligence technology being developed by companies including Google needs to be regulated, according to Sundar Pichai, CEO of Google and now also of its parent company Alphabet. "It has tremendous positive sides to it, but it has real negative consequences", he said.

Speaking at the World Economic Forum 2020, Pichai also said privacy cannot be a luxury good and has to be safeguarded for everyone.

"Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities", he added.

"International alignment will be critical to making global standards work".

In many cases it is not necessary to start from scratch, he added, citing existing rules such as Europe's General Data Protection Regulation, which can "serve as a strong foundation".

Pichai said he has always been a technology optimist, having experienced firsthand the benefits new technology brought to his life. He said governance cannot come from Google or from market forces alone, and called on countries to come together to develop a set of standards.

Pichai is the latest industry leader to join the calls for AI regulation.

Speaking earlier this week, Mr Pichai called for worldwide cooperation on regulating artificial intelligence technology to ensure it is "harnessed for good".

Alphabet reached $136.8 billion in revenue in 2018, largely through Google, which accounted for 99% of all sales. Google published its own AI principles in 2018: "They also specify areas where we will not design or deploy AI, such as to support mass surveillance or violate human rights". The company launched an independent ethics board in 2019, but shut it down less than two weeks later following controversy over its composition.

On addressing the challenges of regulating AI, Pichai said: "AI is no different from climate - you can't get safety by just one company or country working on it - you need a global framework".

These are lessons that teach us "we need to be clear-eyed about what could go wrong" in the development of AI-based technologies, he said.

In November last year, billionaire tech entrepreneur Elon Musk warned that AI is a potential danger to the public and said governments should step in quickly to manage the risks.

"When I look at the future, quantum will be the next big arsenal in the way of technology", Pichai said.

Other reports by Click Lancashire
