Artificial intelligence holds enormous promise, but to be effective, it must learn from massive sets of data—and the more diverse the better. By learning patterns, AI tools can uncover insights and support decision-making not just in technology, but also in pharmaceuticals, medicine, manufacturing, and more. However, data can’t always be shared—whether it’s personally identifiable, holds proprietary information, or sharing it would be a security risk—until now.

“It’s going to be a new age,” says Dr. Eng Lim Goh, senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise. “The world will shift from one where you have centralized data, what we have been used to for decades, to one where you have to be comfortable with data being everywhere.”

Data everywhere means the edge, where every device, server, and cloud instance collects massive amounts of data. One estimate has the number of connected devices at the edge growing to 50 billion by 2022. The conundrum: how to keep collected data secure while still being able to share learnings from that data, which, in turn, helps teach AI to be smarter. Enter swarm learning.

Swarm learning, or swarm intelligence, is how swarms of bees or birds move in response to their environment. When applied to data, Goh explains, there is “more peer-to-peer communications, more peer-to-peer collaboration, more peer-to-peer learning.” And, Goh continues, “that is the reason why swarm learning will become more and more important as … as the center of gravity shifts” from centralized to decentralized data.

Consider this example, says Goh. “A hospital trains their machine learning models on chest X-rays and sees a lot of tuberculosis cases, but very few lung collapse cases. So therefore, this neural network model, when trained, will be very sensitive to detecting tuberculosis and less sensitive towards detecting lung collapse.” Goh continues, “However, we get the converse of it in another hospital. So what you really want is to have these two hospitals combine their data so that the resulting neural network model can predict both situations better. But since you can’t share that data, swarm learning comes in to help reduce the bias of both hospitals.”

And this means, “each hospital is able to predict outcomes, with accuracy and with reduced bias, as though you have collected all the patient data globally in one place and learned from it,” says Goh.

And it’s not just hospital and patient data that needs to be kept secure. Goh emphasizes, “What swarm learning does is to try to avoid that sharing of data, or totally prevent the sharing of data, to [a model] where you only share the insights, you share the learnings. And that is why it is fundamentally more secure.”

Show notes and links:

Full transcript:

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is decentralized data. Whether it’s from devices, sensors, cars, the edge, if you will, the amount of data collected is growing. It can be personal, and it must be protected. But is there a way to share insights and algorithms securely to help other companies and organizations, and even vaccine researchers?

Two words for you: swarm learning.

My guest is Dr. Eng Lim Goh, the senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise. Prior to this role, he was CTO for a majority of his 27 years at Silicon Graphics, now an HPE company. Dr. Goh was awarded NASA’s Exceptional Technology Achievement Medal for his work on AI in the International Space Station. He has also worked on numerous artificial intelligence research projects, from F1 racing, to poker bots, to brain simulations. Dr. Goh holds a number of patents and had a publication land on the cover of Nature. This episode of Business Lab is produced in association with Hewlett Packard Enterprise. Welcome, Dr. Goh.

Dr. Eng Lim Goh: Thank you for having me.

Laurel: So, we’ve started a new decade with a global pandemic. The urgency of finding a vaccine has allowed for greater information sharing between researchers, governments, and companies. For example, the World Health Organization made the Pfizer vaccine’s mRNA sequence public to help researchers. How are you thinking about opportunities like this coming out of the pandemic?

Eng Lim: In science and medicine and other fields, sharing of findings is an important part of advancing science. So the traditional way is publications. The thing is, in a year, year and a half, of covid-19, there was a surge of publications related to covid-19. One aggregator had, for example, on the order of 300,000 such documents related to covid-19 out there. It gets difficult, because of the volume of material, to be able to get what you need.

So a number of companies and organizations started to build natural language processing tools, AI tools, that allow you to ask very specific questions, not just search for keywords, but very specific questions, so that you can get the answer you need from this corpus of documents out there. A scientist could ask, or a researcher could ask, what is the binding energy of the SARS-CoV-2 spike protein to our ACE-2 receptor? And they can be even more specific and say, I want it in units of kcal per mol. And the system would go through. The NLP system would go through this corpus of documents and come up with an answer specific to that question, and even point to the area of the documents where the answer could be. So this is one area: to help with sharing, you can build AI tools to help go through this enormous amount of data that has been generated.
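
To make this concrete, here is a minimal sketch of the kind of extractive question answering Goh describes, using the open-source Hugging Face transformers library. The default model and the sample passage are illustrative assumptions, not the actual tools built during the pandemic:

```python
# A minimal sketch of extractive question answering over a document,
# using the Hugging Face transformers library. The default model and
# the sample passage are illustrative assumptions, not the actual
# systems built during the pandemic.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

# A stand-in for one document from the covid-19 corpus.
passage = (
    "In our simulations, the SARS-CoV-2 spike protein binds the human "
    "ACE2 receptor with an estimated binding energy of -10.5 kcal/mol."
)

result = qa(
    question="What is the binding energy of the SARS-CoV-2 spike protein "
             "to the ACE2 receptor, in kcal per mol?",
    context=passage,
)
# The pipeline returns the answer span, a confidence score, and character
# offsets that point back to where in the document the answer was found.
print(result["answer"], result["score"], result["start"], result["end"])
```

A full system would run this over every document in the corpus and return the best-scoring span along with a pointer to its source.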

The other area of sharing is the sharing of clinical trial data, as you mentioned. Early last year, before any of the SARS-CoV-2 vaccine clinical trials had started, we were given the yellow fever vaccine clinical trial data. And even more specifically, the gene expression data from the volunteers of the clinical trial. And one of the goals is, can you analyze the tens of thousands of these genes being expressed by the volunteers and help predict, for each volunteer, whether he or she would get side effects from this vaccine, and whether he or she will produce a good antibody response to this vaccine? So, building predictive tools by sharing this clinical trial data, albeit anonymized and in a restricted way.

Laurel: When we talk about natural language processing, I think the two takeaways from that very specific example are: you can build better AI tools to help the researchers, and it also helps build predictive tools and models.

Eng Lim: Yes, absolutely.

Laurel: So, as a specific example of what you’ve been working on for the past year, Nature magazine recently published an article about how a collaborative approach to data insights can help these stakeholders, especially during a pandemic. What did you find out during that work?

Eng Lim: Yes. This is related, again, to the sharing point you brought up: how to share learning so that the community can advance faster. The Nature publication you mentioned is titled “Swarm Learning [for Decentralized and Confidential Clinical Machine Learning].” Let’s use the hospital example. There is this hospital, and it sees its patients, the hospital’s patients, of a certain demographic. And it wants to build a machine learning model to predict, based on patient data, say, for example, a patient’s CT scan data, certain outcomes. The issue with learning in isolation like this is that you start to evolve models, through this learning on your own patient data, that are biased towards the demographics you are seeing. Or, in other ways, biased towards the type of medical devices you have.

The solution to this is to collect data from different hospitals, maybe from different regions or even different countries, then combine all these hospitals’ data and train the machine learning model on the combined data. The issue with this is that the privacy of patient data prevents you from sharing it. Swarm learning comes in to try to solve this, in two ways. One, instead of collecting data from these different hospitals, we allow each hospital to train their machine learning model on their own private patient data. And then, occasionally, a blockchain comes in. That’s the second way. A blockchain comes in and collects all the learnings. I emphasize: the learnings, and not the patient data. It collects only the learnings and combines them with the learnings from other hospitals in other regions and other countries, averages them, and then sends back down to all the hospitals the updated, globally combined, averaged learnings.

And by learnings I mean the parameters, for example, the neural network weights; the parameters which are the neural network weights in the machine learning model. So in this case, no patient data ever leaves an individual hospital. What leaves the hospital is only the learnings, the parameters or the neural network weights. And so, you send up your locally learned parameters, and what you get back from the blockchain is the globally averaged parameters. Then you update your model with the global average, and then you carry on learning locally again. After several cycles of this sharing of learnings, we’ve tested it, each hospital is able to predict, with accuracy and with reduced bias, as though you had collected all the patient data globally in one place and learned from it.
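
To make the mechanics concrete, here is a toy sketch of one such cycle in Python. Plain NumPy arrays stand in for real neural network weights, and a simple in-memory average stands in for the blockchain; this is an illustration of the weight-averaging idea, not the swarm learning implementation from the paper:

```python
# A toy sketch of one swarm-learning cycle: each hospital trains locally,
# shares only its model parameters, and gets back the global average.
# Plain NumPy arrays stand in for real neural-network weights, and an
# in-memory average stands in for the blockchain; illustration only.
import numpy as np

def local_training_step(weights: np.ndarray) -> np.ndarray:
    """Placeholder for a training pass on a hospital's private data."""
    update = np.random.randn(*weights.shape) * 0.01  # stand-in gradient step
    return weights - update

n_hospitals = 3
models = [np.zeros(10) for _ in range(n_hospitals)]  # all start identical

for cycle in range(5):
    # 1. Each hospital trains on its own data; patient data never leaves.
    models = [local_training_step(w) for w in models]
    # 2. Only the learned parameters are sent up and averaged.
    global_avg = np.mean(models, axis=0)
    # 3. Every hospital replaces its parameters with the global average
    #    and carries on learning locally in the next cycle.
    models = [global_avg.copy() for _ in range(n_hospitals)]
```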

Laurel: And the reason blockchain is used is because it is actually a secure connection between the various, in this case, machines, correct?

Eng Lim: There are two reasons, yes, why we use blockchain. The first reason is the security of it. And number two, we can keep that information private because, in a private blockchain, only participants, main participants or authorized participants, are allowed in the blockchain. Now, even if the blockchain is compromised, all that is seen are the weights or the parameters of the learnings, not the private patient data, because the private patient data is not in the blockchain.

And the second reason for using a blockchain is as opposed to having a central custodian that does the collection of the parameters, of the learnings. Because once you appoint a custodian, an entity that collects all these learnings, if one of the hospitals becomes that custodian, then you have a situation where that appointed custodian has more information than the rest, or has more capability than the rest. Not so much more information, but more capability than the rest. So, in order to have a more equitable sharing, we use a blockchain. And what the blockchain system does is randomly appoint one of the participants as the collector, as the leader, to collect the parameters, average them, and send them back down. And in the next cycle, randomly, another participant is appointed.
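
Here is a small sketch of that rotating-leader idea. In the real system the election happens through the blockchain itself; below, a deterministic seeded random choice that every participant can compute stands in for that mechanism, which is an assumption for illustration only:

```python
# A toy sketch of the rotating-leader idea: each cycle, one participant
# is picked to collect and average the parameters. A deterministic
# seeded choice, computable by every participant, stands in for the
# blockchain's actual election mechanism.
import random

participants = ["hospital_a", "hospital_b", "hospital_c"]

def elect_leader(cycle: int) -> str:
    # Seeding with the public cycle number means every participant
    # derives the same leader, with no permanent central custodian.
    return random.Random(cycle).choice(participants)

for cycle in range(4):
    print(f"cycle {cycle}: {elect_leader(cycle)} averages this round")
```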

Laurel: So, there are two interesting points here. One is, this project succeeds because you are not using only your own data. You’re allowed to opt into this relationship to use the learnings from other researchers’ data as well, and that reduces bias. So that’s one kind of large problem solved. But then there’s also this other interesting idea of equity, and how even algorithms can perhaps be less equitable sometimes. But when you have an intentionally random algorithm in the blockchain assigning leadership for the collection of the learnings from each entity, that helps strip out any kind of possible bias as well, correct?

Eng Lim: Yes, yes, yes. Good summary, Laurel. So there’s the first bias, which is, if you are learning in isolation, a hospital learning a neural network model, or a machine learning model more generally, in isolation, only on their own private patient data, they will be naturally biased towards the demographics they are seeing. For example, we have an example where a hospital trains their machine learning models on chest X-rays and sees a lot of tuberculosis cases, but very few lung collapse cases. So therefore, this neural network model, when trained, will be very sensitive to detecting tuberculosis and less sensitive towards detecting lung collapse, for example. However, we get the converse of it in another hospital. So what you really want is to have these two hospitals combine their data so that the resulting neural network model can predict both situations better. But since you can’t share that data, swarm learning comes in to help reduce the bias of both hospitals.

Laurel: All right. So we have an enormous amount of data. And it keeps growing exponentially as the edge, which is really any data-producing device, system, or sensor, expands. So how is decentralized data changing the way companies need to think about data?

Eng Lim: Oh, that’s a profound question. There is one estimate that says that by next year, by the year 2022, there will be 50 billion connected devices at the edge. And this is growing fast. We are coming to a point where we have an average of about 10 connected devices potentially collecting data per person in this world. Given that scenario, the center of gravity will shift from the data center being the main location generating data to one where the center of gravity, in terms of where data is generated, will be at the edge. And this will change dynamics tremendously for enterprises. You will therefore see that, with this enormous amount of data generated at the edge by so many of these devices out there, you will reach a point where you cannot afford to backhaul, or bring back, all that data to the cloud or data center anymore.

Even with 5G, 6G, and so on, the growth of data will outstrip, will far exceed, the growth in bandwidth of these new telecommunication capabilities. As such, you’ll reach a point where you have no choice but to push the intelligence to the edge, so that you can decide what data to move back to the cloud or data center. So it will be a new age. The world will shift from one where you have centralized data, what we have been used to for decades, to one where you have to be comfortable with data being everywhere. And when that’s the case, you have to do more peer-to-peer communications, more peer-to-peer collaboration, more peer-to-peer learning.

And that is the reason why swarm learning will become more and more important as this progresses, as the center of gravity shifts out there, from one where data is centralized to one where data is everywhere.
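
One concrete reading of “pushing the intelligence to the edge so that you can decide what data to move back”: the device scores each reading locally and only uploads the unusual ones. The sketch below is a minimal illustration with arbitrary assumed numbers (window size, threshold), not a production edge pipeline:

```python
# A toy sketch of edge-side filtering: the device scores each sensor
# reading against its own recent history and only backhauls readings
# that look unusual. The window size and threshold are arbitrary
# illustrative choices, not a production policy.
from collections import deque

recent = deque(maxlen=100)   # rolling window of recent readings
THRESHOLD = 3.0              # backhaul readings > 3 std devs from the mean

def should_backhaul(reading: float) -> bool:
    """Decide locally whether a reading is worth sending upstream."""
    if len(recent) < 10:
        recent.append(reading)
        return True  # too little history yet; err on the side of sending
    mean = sum(recent) / len(recent)
    std = (sum((x - mean) ** 2 for x in recent) / len(recent)) ** 0.5 or 1.0
    recent.append(reading)
    return abs(reading - mean) / std > THRESHOLD

# Routine readings stay at the edge; only the anomaly leaves the device.
for value in [1.0, 1.1, 0.9, 1.0, 1.2, 1.0, 0.95, 1.05, 1.1, 0.9, 9.7]:
    if should_backhaul(value):
        print(f"uploading {value}")
```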

Laurel: Could you talk a little bit more about how swarm intelligence is secure by design? In other words, it allows companies to share insights from data learnings with outside enterprises, or even groups within a company, but they don’t actually share the data itself?

Eng Lim: Yes. Fundamentally, when we want to learn from each other, one way is to share the data so that each of us can learn from each other. What swarm learning does is to try to avoid that sharing of data, or totally prevent the sharing of data, to [a model] where you only share the insights, you share the learnings. And that’s why it is fundamentally more secure, using this approach, where data stays private in its location and never leaves that private entity. What leaves that private entity are only the learnings. And in this case, the neural network weights or the parameters of those learnings.

Now, there are people who are researching the ability to infer the data from the learnings. It is still in the research phase, but we are prepared if it ever works. And that is, in the blockchain, we do homomorphic encryption of the weights, of the parameters, of the learnings. By homomorphic, we mean that when the appointed leader collects all these weights and then averages them, it can average them in encrypted form, so that if someone intercepts the blockchain, they see encrypted learnings. They don’t see the learnings themselves. But we haven’t implemented that yet, because we don’t see it as necessary yet, until such time as we see that being able to reverse engineer the data from the learnings becomes feasible.
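
The property Goh describes, averaging values while they stay encrypted, can be demonstrated with an additively homomorphic scheme such as Paillier. The toy sketch below uses the open-source python-paillier (phe) library as a stand-in; it illustrates the idea rather than the scheme HPE would actually deploy:

```python
# A toy demonstration of averaging parameters in encrypted form, using
# the open-source python-paillier (phe) library. Paillier is additively
# homomorphic: ciphertexts can be summed, and scaled by a plaintext
# constant, without ever being decrypted. Illustration of the idea only,
# not the scheme HPE would deploy.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Three participants' (tiny) "weights", encrypted before leaving each site.
local_weights = [0.42, 0.38, 0.46]
ciphertexts = [public_key.encrypt(w) for w in local_weights]

# The leader averages the ciphertexts without seeing any plaintext value.
encrypted_avg = sum(ciphertexts[1:], ciphertexts[0]) * (1 / len(ciphertexts))

# Only a decryption-key holder can recover the averaged learnings.
print(private_key.decrypt(encrypted_avg))  # approximately 0.42
```

In a swarm setting, the participants would hold the decryption key while the rotating leader works only on ciphertexts, so it can aggregate everyone’s contributions without inspecting any of them.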

Laurel: And so, when we think about growing rules and regulations surrounding data, like GDPR and California’s CCPA, there needs to be some kind of solution to privacy concerns. Do you see swarm learning as one of those possible options as companies grow the amount of data they hold?

Eng Lim: Yes, as an option. First, if there is a need for edge devices to learn from each other, swarm learning is there; it is useful for that. And number two, as you are learning, you don’t need the data from each entity or participant in swarm learning to leave that entity. It should only stay where it is. What leaves are only the parameters and the learnings. You see that not just in a hospital scenario, but also in finance. Credit card companies, for example, of course wouldn’t want to share their customer data with a competing credit card company. But they know that the machine learning models they train locally are not as sensitive to fraud, because they are not seeing all the different kinds of fraud. Perhaps they are seeing one kind of fraud, but a different credit card company might be seeing another kind of fraud.

Swarm learning could be used here: each credit card company keeps their customer data private, with no sharing of that. But a blockchain comes in and shares the learnings, the fraud-detection learnings, and collects all those learnings, averages them, and gives them back out to all the participating credit card companies. So that’s one example. Banks could do the same. Industrial robots could do the same too.

We have an automotive customer that has tens of thousands of industrial robots, in different countries. Industrial robots today follow instructions. But the next generation of robots, with AI, will also learn locally, say, for example, to avoid certain mistakes and not repeat them. What you can do, using swarm learning, is this: if these robots are in different countries where you cannot share data, sensor data from the local environment, across country borders, but you are allowed to share the learnings of avoiding those mistakes, swarm learning can be applied. So now imagine a swarm of industrial robots, across different countries, sharing learnings so that they don’t repeat the same mistakes.

So yes, in enterprise, you can see different applications of swarm learning: finance, engineering, and of course healthcare, as we’ve discussed.

Laurel: How do you think companies need to start thinking differently about their actual data architecture to encourage the ability to share these insights, but not actually share the data?

Eng Lim: First of all, we need to be comfortable with the fact that devices that collect data will proliferate. And they will be at the edge, where the data first lands. What’s the edge? The edge is where you have a device, and where the data first lands electronically. And when you imagine 50 billion of them next year, for example, and growing, by one estimate, we need to be comfortable with the fact that data will be everywhere. And we need to design our organizations, design the way we use data, design the way we access data, with that concept in mind, i.e., moving from what we are used to, that is, data being centralized most of the time, to one where data is everywhere. So the way you access data needs to be different now. You cannot think of first aggregating all the data, pulling all the data, backhauling all the data from the edge to a centralized location, and then working with it. We may need to switch to a scenario where we are operating on the data, learning from the data, while the data is still out there.

Laurel: So, we’ve talked a bit about healthcare and manufacturing. How do you envision the big ideas of smart cities and autonomous vehicles fitting in with the ideas of swarm intelligence?

Eng Lim: Yes, yes, yes. These are two big, big items. And very similar, too. Think of a smart city: it is full of sensors, full of connected devices. Think of autonomous cars: one estimate puts it at something like 300 sensing devices in a car, all collecting data. It’s a similar way of thinking about it: data is going to be everywhere, and collected in real time at these edge devices. For smart cities, it could be street lights. We work with one city with 200,000 street lights, and they want to make every one of these street lights smart. By smart, I mean the ability to recommend decisions, or even make decisions. You get to a point where, as I’ve said before, you cannot backhaul all the data all the time to the data center and make decisions after you’ve done the aggregation. A lot of the time you have to make decisions where the data is collected. And therefore, things have to be smart at the edge, number one.

And if we take that a step further, beyond acting on instructions or acting on neural network models that have been pre-trained and then sent to the edge, you take one step beyond that, and that is, you want the edge devices to also learn on their own from the data they have collected. However, knowing that the data collected is biased to only what they are seeing, swarm learning will be needed in a peer-to-peer way for these devices to learn from each other.

So this interconnectedness, the peer-to-peer interconnectedness of these edge devices, requires us to rethink or change the way we think about computing. Just take, for example, two autonomous cars. We call them connected cars to start with. Two connected cars, one in front of the other by 300 yards, or 300 meters. The one in front, with lots of sensors in it, say, for example, in the shock absorbers, senses a pothole. And it can actually offer that sensed data, that there is a pothole coming up, to the cars behind. And if the cars behind have switched on to automatically accept these, that pothole shows up on the dashboard of the car behind. And the car behind just pays maybe 0.10 cent for that information to the car in front.

So you get a scenario with peer-to-peer sharing, in real time, without needing to send all that data first back to some central location and then send the new information back down to the car behind. You want it to be peer-to-peer. I’m not saying this is implemented yet, but this gives you an idea of how thinking can change going forward: a lot more peer-to-peer sharing, and a lot more peer-to-peer learning.
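
As a thought experiment, a vehicle-to-vehicle alert like the one Goh sketches could be a small, self-describing message exchanged directly between cars. The fields, the broadcast stand-in, and the micropayment hook below are all hypothetical illustrations, not an existing V2V protocol:

```python
# A hypothetical sketch of a peer-to-peer pothole alert between two
# connected cars. The message fields, the broadcast stand-in, and the
# micropayment hook are illustrative; this follows no existing
# vehicle-to-vehicle standard.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class RoadHazardAlert:
    hazard: str         # e.g. "pothole"
    lat: float
    lon: float
    sensed_at: float    # timestamp from the sensing car
    price_cents: float  # micropayment requested for the information

def broadcast(alert: RoadHazardAlert) -> str:
    """Stand-in for a direct car-to-car radio broadcast (no cloud hop)."""
    return json.dumps(asdict(alert))

# The car in front senses a pothole via its shock absorbers and broadcasts.
message = broadcast(
    RoadHazardAlert("pothole", 42.3601, -71.0942, time.time(), 0.10))

# The car behind, opted in to auto-accept, shows it on the dashboard.
alert = json.loads(message)
print(f"Dashboard: {alert['hazard']} ahead (paid {alert['price_cents']} cents)")
```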

Laurel: It’s interesting, when you think about how long we’ve worked in the technology industry, that peer-to-peer as a phrase has come back around, where it used to mean people, or even computers, sharing various bits of information over the internet. Now it’s devices and sensors sharing bits of information with each other. Kind of a different definition of peer-to-peer.

Eng Lim: Yeah, the thinking is changing. And the word peer, in peer-to-peer, carries the connotation of a more equitable sharing. That’s the reason why a blockchain is needed in some of these cases, so that there is no central custodian to average the learnings, to combine the learnings. You want a true peer-to-peer environment, and that’s what swarm learning is built for. And the reason for that isn’t because we feel peer-to-peer is the next big thing and therefore we should do it. It is because of data, and the proliferation of these devices that are collecting data.

Imagine tens of billions of these out there, every one of these devices getting smarter and consuming less energy to be that smart, and moving from one where they follow instructions, or infer from a pre-trained neural network model given to them, to one where they can even advance towards learning on their own. But knowing that there are so many of these devices out there, each of them is only seeing a small portion of the data. Small is still big when you combine all of them, 50 billion of them, but each of them is only seeing a small portion of data. And therefore, if they just learn in isolation, they will be highly biased towards what they are seeing. As such, there must be a way for them to share their learnings without having to share their private data. And therefore, swarm learning, as opposed to backhauling all that data from the 50 billion edge devices back to the cloud locations, the data center locations, so they can do the combined learning.

Laurel: Which would certainly cost more than a fraction of a cent.

Eng Lim: Oh yeah. There’s a saying: bandwidth, you pay for; latency, you sweat for. So it’s cost. Bandwidth is cost.

Laurel: So, as an expert in artificial intelligence, while we have you here, what are you most excited about in the coming years? What are you seeing that you think is going to be something big in the next five, 10 years?

Eng Lim: Thank you, Laurel. I don’t see myself as an expert in AI, but as a person who is tasked with, and excited about, working with customers on AI use cases and learning from them, the diversity of these different AI use cases, sometimes leading teams directly working on the projects and overseeing some of the projects. But the exciting part may actually seem mundane. And that is, I see AI, the ability of smart systems to learn and adapt and, in many cases, provide decision support to humans, and in other, more limited cases, make decisions in support of humans, proliferating into everything we do, or many things we do. Certain things maybe we should limit, but in many things we do.

I mean, let’s just use the most basic of examples of how this progression could go. Let’s take a light switch. In the early days, even until today, the most basic light switch is manual: a human goes ahead, throws the switch on, and the light comes on; throws the switch off, and the light goes off. Then we move on to the next level, where we automate that switch. We put a set of instructions on that switch, with a light meter, and set the instructions to say: if the lighting in this room drops to 25% of its peak, switch on. So basically, we gave the switch an instruction, with a sensor to go with it. And then the switch is automatic: when the lighting in the room drops to 25% of its peak, of the peak illumination, it switches on the lights. So now the switch is automated.

Now we can take that automation even a step further, by making the switch smart, in that it can have more sensors. And then, through the combination of sensors, it makes decisions as to whether to switch the light on. And to manage all these sensors, we build a neural network model that has been pre-trained separately and then downloaded onto the switch. That’s where we are at today. The switch is now smart: smart cities, smart street lights, autonomous cars, and so on.

Now, is there another level beyond that? There is. And that is when the switch doesn’t just follow instructions, or doesn’t just have a trained neural network model that combines all the different sensor data to decide when to switch the light on in a more precise way. It advances further, to one where it learns. That’s the key word: it learns from mistakes. What would be an example? Based on the neural network model it has, which was pre-trained beforehand and downloaded onto the switch with all the settings, it turns the light on. But the human comes in and says, I don’t need the light on here this time around, and switches the light off. Then the switch realizes that it made a decision the human didn’t like. So after several of these, it starts to adapt itself, to learn from these, adapting itself to switch the light on according to changing human preferences. That’s the next step, where you want edge devices that are collecting data at the edge to learn from it.
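
A toy version of that whole progression fits in a few lines. The sketch below implements the automated 25%-of-peak rule from Goh’s example, plus a simple adaptation step that nudges the threshold whenever the human overrides the switch; the adaptation rate is an illustrative assumption:

```python
# A toy version of the light-switch progression: the automated rule
# (switch on below 25% of peak illumination, per Goh's example) plus a
# simple learning step that nudges the threshold whenever the human
# overrides the switch. The adaptation rate is an illustrative assumption.

class AdaptiveLightSwitch:
    def __init__(self, peak_lux: float, threshold: float = 0.25,
                 adapt_rate: float = 0.05):
        self.peak_lux = peak_lux      # peak illumination for this room
        self.threshold = threshold    # fraction of peak below which we turn on
        self.adapt_rate = adapt_rate  # how strongly one override shifts it
        self.light_on = False

    def sense(self, current_lux: float) -> None:
        """Automated rule: turn on when light drops below the threshold."""
        self.light_on = current_lux < self.threshold * self.peak_lux

    def human_override(self, wants_light: bool) -> None:
        """Learn from the mistake: shift the threshold toward the preference."""
        if wants_light and not self.light_on:
            self.threshold += self.adapt_rate  # turn on earlier next time
        elif not wants_light and self.light_on:
            self.threshold -= self.adapt_rate  # turn on later next time
        self.light_on = wants_light

switch = AdaptiveLightSwitch(peak_lux=1000)
switch.sense(200)             # 20% of peak, so the rule turns the light on
switch.human_override(False)  # human disagrees; threshold adapts to 0.20
```

Letting many such switches average their learned thresholds with one another, rather than each adapting alone, is exactly the swarm learning step Goh describes next.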

Then of course, if you take that even further, all the switches in an office or in a residential unit learn from each other. That would be swarm learning. So if you then extend the switch to toasters, to fridges, to cars, to industrial robots and so on, you will see that by doing this, we will clearly reduce energy consumption, reduce waste, and improve productivity. But the key must be: for human good.

Laurel: And what a wonderful way to end our conversation. Thank you so much for joining us on Business Lab.

Eng Lim: Thank you, Laurel. Much appreciated.

Laurel: That was Dr. Eng Lim Goh, senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise, whom I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.