Free Thoughts, Ep. 261: Emerging Tech (with Matthew Feeney)


[Music] welcome to free thoughts I’m Aaron Powell and I’m Paul Mitsuko filling in for Trevor Burrus I am host of x’ newest podcast building tomorrow joining us today is Matthew Feeney he is director of the Cato Institute’s new project on emerging technology welcome back to free thoughts Matthew thank you for having me what is the project on emerging technology yeah the project on emerging technologies is Cato’s relatively new new endeavor so it’s I trying to count now I think it began a couple months ago June or July I should probably know that but it’s a relatively new I’m running it it’s a project of one at the moment but the the goal of the project is to highlight the difficult policy areas that are raised by what we’re calling emerging technologies now this is always a difficult thing to define right and of course emerging tech is not just changing technologies but new things arriving on the scene and what I’ve done is to try and highlight a couple of issues where I think Cato has a unique capability to highlight interesting libertarian policies associated with new tech so some of the policy areas that we’re focusing on include things like artificial intelligence driverless cars drones data and privacy issues and others there are a lot of tech issues that have been around for a while so I don’t think net neutrality is going anywhere anytime soon nor are the numerous antitrust issues associated with big tech companies and we’ve certainly Ocado had people write about those issues before but this new project is confining itself to five specific areas but I’m sure that as the project grows and develops the the list of issues we’ll be tackling will grow how did you choose those five in particular yes so the the five were areas where I thought Kaito didn’t have enough people writing about and also area so I think libertarians have something new and interesting to contribute so for example I’ve first couple of years at Cato I did write about the sharing economy I also 
wrote a little bit about drones body cameras new tech issues but my work with drones for example was just on law enforcement use of drones specifically the concerns associated with drone surveillance but I wasn’t writing at all really on the commercial use of drones so the exciting world of taco delivery drones and building inspection drones and that’s a whole different policy area really compared to drone surveillance so that was an area where I thought we should really have someone who can direct a project that will Commission work on those kind of those kind of issues another one would be so artificial intelligence right is something that I think is very exciting but poses difficult questions to libertarians and libertarian commentary on that space has been not nearly I think as robust and as loud as it could be so that’s another reason why I picked that but yeah basically the the five I think fulfill the criteria of being focused on new and emerging tech that libertarians have something interesting to say about and that caters in a good position to to tackle can you give this example of what you mean by libertarians of something new and interesting to talk about because a lot of a lot of tech policy in the past has taken the form of its regulatory policy and its should this thing be regulated or not typically and then what form should have be regulated in and and that tends to break down along the standard lines you have the people who are opposed to regulation you have people who were generally Pro regulation but what’s what’s uniquely I guess libertarian in in the way that you’re approaching technology issues yeah so don’t think that the way that we’re approaching the project is much different to how a lot of us here in the building approach our other policy areas and for me it’s to tackle the issues raised by this tech by embracing a presumption of freedom and trying to minimize coercion right so that’s so number one on the presumption of freedom that we 
should we should act in a way that allows for innovation and entrepreneurship and make sure that people working in this space are in a position where they’re asking for forgiveness more often than they’re asking for permission and as far as minimizing coercion this goes back to some of the work I discussed earlier when we’re talking about data privacy and drones we should be wary of some of the government use of the technology and making sure that exciting new technologies like drones can be used for really cool stuff like deliveries and other private applications while also trying to make sure that the scary aspects of it like surveillance are being being put under lock and key as much as possible something like artificial intelligence might be another good example that we want to make sure that people working in the space are free to innovate and to explore new ideas but we want to make sure the government use of it especially when it comes to autonomous weapons and automate automate assailants that we ensure that there are privacy’s then keep those threats checked so emerging tech by its nature is still you know yet to come it’s already not yet it’s kind of here but it’s still in prototype or developmental form so a lot of the potential benefits as well as potential risks are still in the future so like as you’re trying to decide what should be regulated and what shouldn’t be regulated or in what ways it should or should not be regulated like what’s your rule of thumb for trying to rule on decide on something that hasn’t actually happened yet yeah I suppose the libertarian response to this is a comparatively straightforward right we should proceed with caution when dealing with imaginary threats so let’s think of a good example right maybe only because I work on it in my own research right but I think it’s fair to say that in the coming decades that we will see more and more government use of unmanned aerial surveillance tools I think that’s a fair assumption I 
also think it’s fair to say that that technology will improve as much as it proliferates and as I did write I wrote a paper saying look we should in preparation for this world we should have the following we should have the following policies in place what I’m very hesitant to do and not that it should never be done right but we should be hesitant I think to develop new rules because of a new thing coming on to the block drones for example raise interesting privacy concerns but it’s not clear that they’re necessarily unique in the way that a lot of people think they are so we don’t like the fact that drones could be used by people to snoop on our us in our bedrooms or to fly over our barbecues and we don’t like that police could use them to to do surveillance but we already have laws with peeping tom laws we have a tort system that can do off a lot of these complaints and while the Supreme Court precedent on things like drone surveillance is not particularly not very satisfying it is the case that states can and have gone above and beyond what the Supreme Court requires so going forward I think we should be hesitant to think of well we need a driverless car policy we’re gonna write down or we need a drone policy we should think about the kind of threats that come from these these fields but resist the temptation to write a lot of regulation and anticipation for the proliferation of the technology but isn’t that the problem that because these are emerging technologies they’re they’re not technologies that we either as citizens or just ordinary people in our lives or as lawmakers or legislators regulators there we don’t have any experience with them we haven’t used them we haven’t seen like how they shake out and so that that notion of saying well you know we should we shouldn’t just imagine threats isn’t that what we’re kind of forced to do one of the things that can just that distinguishes emerging technologies now from emerging technologies in the past is the pace 
at which they can become all pervasive the pace at which they can spread so either their network technologies that just you know in a matter of years suddenly everyone is on Facebook whereas you know the printing press took a lot longer to get books into everyone’s hands that don’t we have to be anticipating threats because it with a lot of this stuff if we don’t and we don’t protect ourselves now it might be too late well too late for what right this is the the question I think history has enough examples of people exaggerating threats that we can learn from so one of my favorite examples of this right is the the British 1865 locomotive Act which required a vehicle that not pulled by an animal so a steam-powered locomotive if it was on a road and towing something it was legally required that you would have a man 60 60 yards ahead of it with a red flag right because people were anticipating certain threats right that these these new technologies are gonna cause accidents and so what we need is it’s obvious right we need a man running ahead of these things with a red flag to to alert people that this very dangerous thing is coming across I don’t know if that’s the right kind of approach to dealing with emerging technology issues right we we can anticipate that with the emergence of the locomotive that there will be occasional accidents and some people will get hurt the the early years of flight for example are just full of people killing themselves in these new flying machines and you might it sounds a little cold harder to say but the price of innovation for something like that is that mistakes get made and people might get hurt and and it’s difficult especially in today’s world where news travels so quickly that the moment that someone gets hit by a driverless car or a drone lands on someone’s head everyone’s gonna hear about it and I think people are thirsty for news for for bad news unfortunately and that’s something we’re always gonna be fighting against so I 
actually would go on record right now saying I’m in favor of a law requiring the Elon Musk wave a red flag 60 feet in front of every driverless vehicle because he has more time on his hands so I hear you talking about essentially assumption of risk that with when it comes to tech we have a long history of people over rating or exaggerating fears of the downsides of attack and having a harder time imagining the beneficial applications and so a light touch regulatory policy wedded with like a general cultural sense of hey if you want to experiment with this as long as you limit the externalities the the damage other people go for it I mean is that kind of the ad – you bring this stuff like on it on you know unmanned vehicles and like yeah I think that the the barrier for government intervention in this space should be difficult to overcome right so it had a very high risk of death or serious injury is basically where I would say you can maybe argue for some kind of regulation and again we’re sitting in the Cato Institute right I mean our approach to regulation this is a unique approach to emerging technology I think libertarians across the board have light-touch approach and I feel like you can have that approach while accepting that there are risks right and the the problem of course though is that with a lot of this stuff an argument can be made that innovators and entrepreneurs might be hesitant to start doing a lot of this work if they feel like they might get in trouble or they want to wait until there is a safe regulatory space so Amazon right decided to test its delivery drones in England because they knew that the FAA had not cleared the the drones delivery drone testing here so I can understand why Amazon didn’t say yeah well screw it we’ll do it anyway you know people want to be I think if you want to be a respected private business you don’t want to get in trouble with the feds I get that but I think that’s an unfortunate feature of FAA regulation that the 
FAA should have an approach of you can you know you will better be careful because you will be in a position to ask forgiveness but I still think that’s a better position than people in the drones base asking for permission but I mean and going kind of back to the question I asked before with emerging technology and with the to quote Donald Rumsfeld the unknown unknowns in you know at play here do we want people to be as extra-special careful in a lot of these areas because you even have situations where so the the story often gets told a lot of people like you know this is this is the narrative is that all of a sudden a handful of people in palo alto well no one was watching broke american democracy with social media right or or a situation where you know that everyone’s kind of out there innovating and then suddenly we have a rogue AI and we can’t do much about it or or you know like gene splicing CRISPR people making stuff in their in their garages and then we have a pandemic like that that that kind of threat of regulation or that asking for permission does that help at least to mitigate against those kind of sudden catastrophes well I think you’re highlighting something interesting namely that well first I’ll say hindsight’s always 20/20 right that it’s easy to look back like wow if we had X regulation why would never have happened but it’s easy for people to come up with scenarios the difficult job is thinking of regulation that would hamper that scenario if I’m ever taking place while also not hurting innovation so rampant AI okay so this is something anyone who’s what a science-fiction film worries about but what’s the fix to that do we write a law saying no one shall build AI that will run amuck on servers and take over that I mean it’s isolating a threat is not the same thing as coming up with a good regulation for that threat and so social media social media companies ruined American democracy so this is sometimes said by people but what what’s the 
regulatory fix that would have stopped a lot of the the bots and the trolls that got everyone concerned in the wake of the election that that’s a much harder question it seems to me it’s easy to get outraged and to get worried about possible threats but coming up with solutions is much much harder and I think we should also keep in mind how likely the threat is it would be a shame if developments in AI were seriously hampered because a couple of lawmakers watched too many science fiction films and got really really worried about the you know the terminators are well how big of a problem is is that specifically that this is an area where lawmakers I mean we put the Cato Institute we often lamented how little lawmakers seem to know about the subjects they plan to regulate and in fact we have named our auditorium the FA Hyack auditorium who you know Hayek famously offered a theory for why it was lawmakers could never know enough about the stuff they wanted to regulate to regulate it well but this seems to be an area where lawmakers are particularly ignorant that it’s I mean it’s it’s often cringe-inducing to watch like congressional testimony because these lawmakers have levels of understanding of the Internet of networks of technology that is substantially worse than you know the typical middle schoolers so is that how do we deal with that kind of problem that we’ve got we’ve got a situation where lawmakers there’s this tech they you know they the urge is always to pass a law whenever there is a threat or potential threat it’s passed a law and they they’re doing that because they want to do it they’re also doing it because constituents you know demand pass a law but that this is an area we’re almost like by definition you can’t know much about it yes I defy anyone under the age of 30 to watch anything like soccer Berg’s testimony on the hill and not have their head in their palms by the end of it it is very worrying that many of the lawmakers on the hill don’t seem 
to know much about this and that makes sense because a lot of the people who’d be qualified to be on staff in these offices to actually give advice and to explain to members of Congress how the stuff works could be paid much much much better almost doing anything else actually in the tech industry and that’s that’s a serious worry and there’s also this worrying inclination among some lawmakers to urge technology companies to and I quote this isn’t a phrase original to me but to nerd harder right there whenever there’s a problem like end end encryption people think well we don’t like the fact that some terrorists can communicate using whatsapp or signal but there must be a fix you must you know how can you not fix this and there’s a there’s a frustration there where we’re sitting i I think that we should maybe spend more time focusing on the benefits of this technology not focusing on potential costs so driverless cars will kill some people they just will and that’s of course regrettable but we should think about the lives that they could save the vast majority of auto fatalities in the United States are directly attributable to human error so from that perspective driverless cars that are better than human drivers but not perfect will save thousands and thousands of lives a year and once Congress eventually gets happy with the proliferation of driverless cars we should expect that for the next couple of years there will be headlines of driverless cars killing people and that’s to be expected and it will be a big cultural shift so emphasizing the benefits rather than the costs I think is is worthwhile both that’s easy for me to say because I won’t be the one sponsoring the bill that allows these things to run rampant and then who are they gonna wag the finger at when the bad things do happen but like I looted too early a good news rarely makes headlines and it’s also slow moving right it will take a long time for the benefits of driverless cars to be realized in the 
data but the accidents and the deaths will be reported instantly so I hear from you Matthew is a sense that our cost accounting or cost-benefit accounting analysis is flawed right it’s easy for us it’s kind of a scene versus the unseen situation it’s easier for us to imagine apocalyptic worst-case scenarios and then to discount the possible benefits so whether it’s you know pharmaceutical regulation you know something like the FDA has a notoriously stringent safety requirement that doesn’t really account for the fact that not approving a life-saving drug drug costs thousands even millions of lives and that doesn’t play a role they just are asking whether or not the drug itself will harm lives so in that sense we have a you know the the ledger the kind of accounting ledger is is flawed when it comes to emerging technology but I’m also interested in hearing you talk about ways in which regulate regulators themselves by regulating too quickly it can actually fulfill kind of self-fulfilling prophecy when it comes to kind of the downsides of that technology so a good example of that would be what I just want to make sure I understand a question so I suppose you can imagine a situation right where the FAA says well we haven’t had as many drone accidents as other countries because we haven’t let drones fly mm-hmm right which probably an accurate statement we need to keep in mind that while that’s true and the FAA is tossed with safety right they need to make sure things are safe we need to also keep into account while we’re losing I think when you ground drones you in a cost namely you are not having as innovative and as exciting an economy as you could have so yes a federal safety agency can stand up and say bad things aren’t happening because we’re just not letting people experiment but it’s not a particularly useful thing to say it seems to me and it’s also not helpful because no one who’s rational is denying that emerging technologies will come at a price we’re just 
saying that in the long run the benefits outweigh the cost given that and given that bad regulation or over burdensome regulation can not just slow down the pace of progress but can cost lives can certainly reduce wealth economic growth when is it appropriate and we’ve seen this happen a fair amount in the emerging tech space when is it appropriate or is it ever appropriate to intentionally circumvent regulations so we’re the part where Aaron asks me when is it okay to break the law so I would like to point out that I think there are a lot of people who do this by accident right I don’t know the number but I imagine there are many people who got drones for Christmas or birthdays right and flew them without adhering a hundred percent to FAA regulation I can say that with almost a certainty the response from the FAA I think should not be to bring the hammer down now when when is it acceptable I mean I don’t know anything sorry good the classic example being like uber which uber has arguably changed the world in and frequently in a positive way they’ve granted they have their problems as a company but a lot of that came with them basically ignoring local regulations okay in that case I would argue that at least in some of the jurisdictions Ubu could have made the argument that well we looked at the taxi regulations and we decided that we didn’t fit the definition of taxi so off we went that’s a much easier argument it seems to me then a drone operator saying that they’re not a an aircraft under FAA definitions uber I think was doing something very interesting which was providing an obvious providing obvious competition to an incumbent in industry without being actually a very different thing to two customers behind the scenes I think a lot of people found Oberon taxis to be very similar but actually the very different kind of businesses and it’s a very different kind of technology I take your point and of course ubers ubers opponents would oftentimes portray portray 
Hoover as a as a lawless invader I think at least in some jurisdictions uber could make the argument that actually no we just feel like we didn’t fit into that regulatory definition and uber does fit into this very well at least when it began fit into a very awkward regulatory gray area so in a situation where you’ve taken a look at existing regulations and you think that you don’t actually run afoul of any of them I don’t see why people shouldn’t feel free to get into an area and innovate a B&B might be another example where you okay well I took a look at local laws and I figured that I wasn’t a hotel seems to be a reasonable thing for people to assume but I won’t say this is without risk you know I wouldn’t advise anyone in a private company to deliberately break the law and to hope that you have good lawyers on hand I don’t know if that’s the the best approach because local lawmakers don’t like don’t like that kind of confrontation for sure I mean I suppose there’s a some of that question comes down to one’s own ethic right I mean most people have imagined and an ethical obligation to break the law when there is some kind of clear cost to life that comes from following the law I mean so you know civil disobedience writ large you know and no one it well some people did hold them responsible but when you Martin Luther King jr. 
or another civil rights activist blocks the highway for a march on on Selma Birmingham or whatnot right like it’s the idea is is that laws or it’s okay to circumvent them when there’s a clear epical obligation to do so that the law is less important than than like ethical systems so that that gets complicated really quickly depending on I will mention here though that charles murray i think it’s I haven’t read the book but I think that in one of his most recent books Charles Mary advocated for a law firm that specializes in protecting entrepreneurs like this to basically encourage people to go out into the the wilderness Adam fear from makita’s who wrote an excellent book called permissionless innovation he dafuq categorizes technologies is born free and born captive that some are born captive into regulatory regimes and others are born free that truly new and innovative and regulators haven’t caught up yet but if you’re born free as Adam might call them I think you better be ready for certain fights and the Charles Murray’s recommendation was yeah we should just basically have law a law firm that specializes in helping entrepreneurs with these kind of fights from the regulator’s point of view I think they should perhaps just choose their fights more carefully and and not scare away people but that’s not gonna happen anytime soon and we you know the costs that we’ve been talking about like like deaths and injuries are I think easier to discuss but the problem with a lot of technology or emerging technology discussions are you have these more difficult to pin down complaints about the impact on society and what’s it doing to our children and isn’t this making them us more isolated think about the citizenry all that sort of stuff is thank you tipper gore well right it’s it’s interesting because this isn’t a new kind of complaint right but nonetheless remains sticky I wanted to to briefly read out an app a quote I found from 1992 there was a Neil postman postman sorry 
wrote a book called um technically the surrender of culture to technology and he was on c-span in 1992 and he previously complained about television right and he was he was on and he said when I started to think about that issue television I realize that you don’t get an accurate handle on what we Americans were all about by focusing on one medium that you had to see television as part of a kind of a system of techniques and technologies that are giving the shape to our culture for instance if one wants to think about what has happened to public life in America one has to think of course first about television but also about CDs and also about faxes and telephones and all the machinery that takes people out of public arenas and puts them fixed in their home so that we have a kind of privatization of American life this is a really interesting kind of complaint but he goes on to describe a future that were kind of in now where he says when his people say with some considerable enthusiasm that in the future putting television computers and the telephone together people be able to shop at home vote at home express political preferences in many ways at home so that they never have to go out in the street at all and never have to meet their fellow citizens in any context because we’ve had this ensemble of technologies that keep us private away from citizens and I hear complaints like this quite regularly I mean that’s from 1992 but there is still there’s a very persistent worry that emerging tech will make us and make us bad citizens make us isolated AI is exciting but what what do you will will our children say please and thank you to the robots will the robots become our friends or our sex partners you know this is isn’t all this stuff making us kind of isolated and this isn’t a new concern frustrating and it’s not going away so we have been talking largely about policymaking policymakers regulators people who are in the in the the policy world but how much of that is 
really just downstream of culture such that when we’re talking when we’re dealing with these issues of emerging technology that where the real action is happening is in the culture is and the cultural acceptance of it and so to some extent focusing on the on strictly the policy is kind of missing where much of the influence is or will be I certainly do think that it’s important to communicate to the public about this because like you mentioned some of these policy concerns are downstream from what from the public and in preparation for the podcast I was finding articles from you know 18:59 editorials in the New York Times complaining about the Telegraph and a 1913 New York Times article complaining about the telephone and how it’s in current bad manners all this stuff isn’t new but I think when we’re sitting in a think tank we should be ready to communicate with the public in addition to regulators and lawmakers if if we have a optimistic forward-thinking public then you hope that that will translate somehow and translate somehow to lawmakers but yeah lawmakers are made up of human beings and the public or human beings and they have a pessimism bias and I think though when you focus again on on benefits that maybe more parents would be happy if driverless cars could take their kids to baseball practice and it would be better for people if their elderly parents have appliances and homes that can monitor if they’ve fallen down or if they have had a medical emergency it would be good if we were able to the travel more safely to have our homes know more about us it would be nice to to come home and to have the home you know sat at the right temperature and playing the right kind of music making sure that people realize the benefits of a lot of this stuff is is I certainly think part of part of the mission my only audience is not lawmakers that’s for sure all of that the home that knows a lot about you that all these things that can predict stuff about you keep track of 
things about you there’s a lot of data there there’s a lot of data gathering a lot of it depends on devices that can surveil us in in one way or another and we as libertarians we as Cato Institute scholars we spend a lot of time talking about the problems of government having access to data and government surveillance programs but are we concerned should we be concerned about the level of pervasive private surveillance that that rosy future you just sketched out demands I think we should be worried you can listen and read a lot of Cato material on the concerns that we have about government access to data and I certainly don’t want to sound blase about that so I’m my primary worry is the government mostly because as creepy as a lot of this might be when it comes to Amazon and Google Amazon and Google contour SME will put me in a cage I think that is a big difference people might be a little creeped out by these shopping algorithms they might be a little freaked out by the fact that these companies do know a lot about us but I want the heavy lifting there to be on government access to that data you you buy a lot of these appliances there’s a certain degree of a you you assume that they will be collecting information about you but I’m not as worried about Amazon as I am the government for the reasons I just outlined and I don’t think Amazon has an interest in creeping out as customers too much should we be worried though about companies like Amazon gathering all this data centralizing all this data and then that data suddenly becoming either through the passage of legislation or through subpoenas or warrants or through government hacking accessible to the government yeah there’s a degree of trust you have in these big companies they need to do a good job at being custodians of data I don’t want to speak to the I don’t know a lot about Amazon’s actual security just using them as an example they have a very strong profit seeking incentive to make sure that their 
customers privacy is is not violated there’s not much though that they can do right when the government comes to them with a valid court order they they you know are put in a tough spot and and again that’s why I think that’s where we should have the focus but we shouldn’t be of in any doubt that a lot of these companies have a huge amount of information on us and I think it was my colleague Julianne who once said that you know if Google was a state it would be a pretty powerful police state given the amount of information has my apologies to Julianne if I’m butchering your quote but the point being that we they do gather a huge amount of information on us and people even like me right I do incur a cost when you use protonmail instead of Gmail or you use DuckDuckGo instead of Google for web searches and that cost is that you know Google now knows a little less about you and can’t provide you with the degree of service that most people have but that’s fine by me there’s still choice Google’s not a monopoly when it comes to this sort of stuff so and people value their privacy subjectively and maybe I value it as slightly higher than the average person but I have no problem with people using Google products to make their lives better I do worry about government access to that data to to conduct investigation it feels like forever ago now but it’s only a few years ago folks were there was buzz about Mark Zuckerberg running for president it’s that blend of a major you know a major tech company with the power of the state while it’s you know unlike now it’s not outside the realm of possibility even if it’s not as literal as the head of one being the head of the other um something to go back to to something mentioned before Matthew you teased a bit about how in Great Britain I think it was regulatory policy towards unmanned aerial vehicles was more favorable so it pushed you know Amazon to conduct tests overseas so to broaden that out how would you say like on the net 
the international regulatory landscape compares to the United States? Where does the U.S. rank when it comes to relative freedom and regulation of emerging technology?

I think it's difficult to say, for the following reason: saying "technology policy" is a bit like saying "economic policy," right? It's a huge range of things. So let's think of the plus side first. The United States is still a global leader when it comes to tech innovation; this country is home to some of the best-known, largest, and most interesting tech companies. GlobalData recently produced a list of the 25 most valuable tech companies in the world: 15 are in North America, 7 in the Asia-Pacific, and only 3 are in Europe, and that, I think, is not an accident. Europe is, as you alluded to, slightly ahead of the United States, I would say, when it comes to drone policy, but they slapped Google with a huge fine on antitrust, I think it was 5 billion dollars. So it depends on the technology you're talking about. They're certainly ahead, I would say, when it comes to drone policy, but when you're leveling fines worth billions of dollars on Google, right, it's not a great look. So examining technology-specific policy, I wouldn't want to make a big generalization. I would say, though, that there's probably a reason that the United States is still today a massive hub of funding and innovation when it comes to technology.

Does competition work in that area? Do you see evidence that countries look over at other countries that have better tech policy, and so are getting bigger companies and more innovative products, and say, well, it's probably good for me to loosen things up a bit too?

I don't know, I'd have to look at data. I think the problem for a lot of these countries is that Silicon Valley is still a massive talent suck for a lot of these countries. That's a gut assumption; I'd have to look at the data on
that. Competition, of course, is an interesting point when you're talking about big companies like Google, Apple, Amazon, and Facebook, because a lot of those companies are big enough that they can buy interesting smaller companies.

So what would be a good example?

Yeah, YouTube, Instagram, WhatsApp: these are all companies that were bought by much bigger companies. And that's not necessarily a bad thing, and it's not necessarily something that we should complain about, but for the foreseeable future I imagine that Amazon, Google, Facebook, and Apple are going to be on the lookout for interesting new companies to buy, one, because they view them as competition down the road, but two, because they also feel that they can do interesting things with those companies. And that's not a bad thing, necessarily. If you are building something that competes with Amazon and you're presented with a life-changing amount of money, there will be some people who say, no thanks, I'll keep plugging away at what I'm doing. I believe it's the case, I'm not a historian when it comes to Facebook, but I believe Facebook faced a buyout offer at a certain point, right? Didn't someone want to buy Facebook? I could be making that up. But my point is that there are very large, successful companies today that said no to buyouts.

Netflix, famously.

Yeah, that might have been Blockbuster: the offer was on the table for, you know, some minuscule fraction of what Netflix is valued at now, right. And keep in mind that this competition question is something we're going to hear more of as long as Trump is the president, because there's a perceived anti-conservative bias in Silicon Valley that people think is actually affecting the product. I think it's fair to say that most people who work in these big tech companies are probably to the left of the average American, but I'm not convinced that that personal bias among employees has had a direct impact on the products. And you've had this weird
situation where self-professed conservatives are now saying, well, they're too big and we should talk about antitrust. When we're thinking about the big four, Google, Amazon, Facebook, and Apple, I'm not convinced that these companies are monopolies in the true sense, and I think it would be a mistake to bring antitrust action against them.

So the example that comes to my mind of international regulatory competition: at TechCrunch Disrupt out in San Francisco, a number of panels hit on the idea that when full self-driving cars, level 5, you know, no steering wheel, when that gets rolled out, it'll be rolled out in China before it gets rolled out in the rest of the world. And that will be because, according to a number of speakers, the central government in China has simply established by fiat: we are going to be open to autonomous vehicle technology. And actually, by dollar value, investment in AV technology in China just over the past year has matched the rest of the world combined. So you're seeing things shift to a place where, in China, the central party can cut through local- and state-level competition. What that brings to mind for me, though, is a question for you, Matthew, about how emerging tech should be regulated by local and state authorities versus federal authorities, the question of federalism and emerging tech policy. How do you approach that as someone analyzing emerging tech?

I'm very interested in a lot of the local regulations that handle industries like ride-sharing and other things you see in the sharing economy, but when it comes to a lot of the technologies we've discussed, the regulators that are very powerful are federal: the FAA, the FCC, and, with bioengineering and all that, the FDA. So I am in a position where I am mostly focused on federal regulations, but I'm certainly keeping an eye on what's happening at the local level. And as we discussed earlier, state and local governments can take it upon themselves to address
some of the concerns we've discussed, especially when it comes to drone surveillance, which was an example I used. And there are state and local governments that have been comparatively welcoming to the sharing economy, that have decided, no, we're going to be a home of innovation and entrepreneurship, and that's what we want. But I think it's fair to say that for some of the big issues we've been discussing today, driverless cars and drones and things like this, it's ultimately probably going to take some federal leadership to get the kind of regulatory playing field we want implemented.

[Music] Thanks for listening. Free Thoughts is produced by Tess Terrible. If you enjoyed today's show, please rate and review us on iTunes, and if you'd like to learn more about libertarianism, find us on the web at


  • Project Malus

    The last comment by the guest, about tech innovation requiring federal regulation, seems to be a problem for those wishing to minimize the state. Further, if consumers are too fragmented to have any real power in forcing big companies to change policies, and the state is minimized (perhaps through demographics and therefore less revenue for the government), doesn't this mean large companies effectively become the state? This raises questions about the ability of grassroots organizations to engage large companies. At that point, it may be necessary to restructure into smaller groups which have more economic clout than individuals or families, while avoiding the anonymity (which dilutes individual, and therefore collective, action) and the hierarchy (which encourages corruption) that arise when groups become too large. In other words: many more, smaller groups, connected effectively through internet communication, to enable economic boycotts that will replace or combine with civil protest.

    This idea seems to me to be the way to minimize both the state and large companies, since it is non-violent and works through small, iterative change. Trying to engage such large structures on their own ground seems fruitless.

  • Tom Blackstone

    Can anyone tell me how they created the video for this (the image with the vertical bars that move when people talk)? I want to create a YouTube podcast, but have so far found the process of converting an audio file into a YouTube video to be too time-consuming. I'm wondering if whatever they used to make this video would cut down on the time used.
