We’re on the verge of a truce in the struggle between human and artificial intelligence. The popular narrative of AI, whether drawn from sci-fi movies or simply fear of the unknown, has always pitted the human brain against the artificial, silicon mind. But as data science and AI continue to evolve, a centaur effect of sorts is emerging, in which the two sides work together to achieve outcomes that were once unimaginable. The centaur analogy of human-machine collaboration only works, however, if both the machine and the human are adding value, which is not always the case, explains Jay Yonamine, head of data science, global patents at Google.
For example, Google’s self-driving car originally had a manual system, complete with a steering wheel and pedals that a human driver could use to override the automated system. But after tests showed that humans intervened at the wrong times, leading to less safe outcomes, Google removed the steering wheel. “If you’re trying to leverage data science, machine learning, and automation in your company, then you need to constantly be testing what the value of the human escort is on top of the automated output,” Yonamine says. “You need to be aware of when it’s time to remove the steering wheel.”
Yonamine may understand this tension better than most, thanks in part to his unusual path into data science. Long before he joined Google, Yonamine earned a PhD in political science from Penn State University. His research focused on political conflict: in particular, he wanted to know what caused conflicts to start, escalate, and stop.
The answers to these questions had significant ramifications for policy decisions. “If you can understand the driving factors, then maybe you can work to prevent conflicts, shorten them, and reduce the intensity,” Yonamine says.
Yonamine and others in the field primarily based their research on other political scientists’ theories, which were hard to definitively prove and flourished or failed based on the strengths of their arguments. So, Yonamine turned to a more objective source of understanding: data science.
“People have had theories about causes of political violence for millennia,” he says. “It seemed likely that data-driven, machine-learning approaches would be able to outperform classical theories in predicting political conflict if we were able to use rigorous tests of predictive accuracy.”
After earning his degree, Yonamine decided not to enter academia but rather to pursue data science. He worked in insurance at Allstate, the patent-solutions company RPX Corporation, and the cognitive-computing company Bottlenose before joining Google in 2015. He was drawn to Google’s patent function in part because of the parallels he found between aspects of the legal industry and political science.
As in political science, decisions in the legal field are often based on whose argument is the most convincing or well-established, rather than on which approach led to the best outcome. “There are a lot of decisions being made and a lot of actions pursued, but it isn’t always clear how to determine if that was the correct action or the correct strategy because there aren’t always objective evaluation criteria,” Yonamine says. “I saw a tremendous opportunity to bring data science to the legal field and to patents in particular.”
To improve outcomes in Google’s patent department, Yonamine first leverages automation. “There is a tremendous amount of work in patents and legal that is not high-value-add work and is not work that requires legal expertise,” he says. “Automating that frees up time for people on our team and our legal experts to use more of their brain power on the tasks that require it, with the expectation being that you’ll have higher job satisfaction and you’ll have better outcomes on that high-quality work.”
Automation might improve efficiency, but that improvement can come at the expense of human jobs. Many leaders counter employees’ fears that new technologies will automate away jobs by pointing to the centaur model that Yonamine describes, in which a person leverages tools such as artificial intelligence to achieve better outcomes. To put that model into practice, Yonamine employs robust and transparent data science tools that give users the information and support they need to make the best decisions, allowing them to work with the technology instead of against it. Although his colleagues are experts in their fields, they are not always well versed in data tools. Yonamine dedicates much of his work to educating his colleagues on how to leverage new technologies and to responding to wariness around automation and opaque algorithms.
“What’s important is making sure of the degree of clarity in the algorithm,” he says. “It’s important to make sure that the level of transparency required is in line with how the algorithm is being used.”
Among the tools Yonamine’s team uses are Tableau dashboards, which allow users to easily and consistently access data. “We find it really important to make sure we build dashboards that provide visualizations and analytics that are universally accessible to everyone on the team at all times; to make sure that the data is consistent, so that people are all going to the same source of truth; and to make sure that when three different people have the same question, that they all get the same answer,” Yonamine says.
Certain statistics, such as how many patents Google currently owns, are immutable, but data can provide opportunities as well as answers. Team members have access to raw data, which they can use to brainstorm, explore, and test new methods. “We try to check that balance of giving people easy access to data, supporting creativity, and thinking outside the box, while also making sure that, for day-to-day, mission critical tasks, people are using the data consistently,” Yonamine says.
In its best form, data technology creates not only a more efficient workflow and better-quality outcomes, but also a more meritocratic system. When a decision is reached through objective evaluation as opposed to subjective argument, human bias is less likely to influence the outcome. “When the quality of decision-making is based on who can make the most articulate argument, then you get things like cliques and who’s friends with whom,” Yonamine says. “There’s no objectivity to establish who’s doing better, so you have to fall back to subjective things. It also becomes much easier to create artificial barriers to prevent certain groups of folks from gaining entry.”
But the presence of automation or analytics does not sufficiently demonstrate a tool’s value. As the legal technology space continues to grow, Yonamine continues to face the challenge of implementing data-science approaches to add measurable value to Google’s patent process. His team constantly reevaluates each approach’s return on investment. To find the most effective solutions, they combine the best that machines and humans have to offer. They use both measured experiments and a devil’s advocate approach, in which one team member argues that the process in question does not add value.
“Every time there’s some complicated or fancy machine-learning or data-science tool, there should always be someone taking that position that it’s not adding value. And only when you can definitively disprove that position is there actually value,” Yonamine says. “If you can’t express the ROI of the data science you’re doing in a very crisp elevator pitch, you might want to rethink how you structure that project.”
Darts-ip would like to recognize Jay Yonamine for his achievements in data science and patents. Darts-ip values its relationship with Google and shares its mission to “organize the world’s information and make it universally accessible and useful” by offering a global platform of more than three million patent and trademark cases. (www.darts-ip.com)