Autopia

The Future of Cars

Jon Bentley
About This Book

Cars are one of the most significant human creations. They changed our cities. They changed our lives. They changed everything. But in the next thirty years, this technology will itself change enormously. If Google get their way, are we all going to be ferried around in tiny electric bubble-cars? Or will we watch robots race a bionic Lewis Hamilton? And what about the future of classic cars? In Autopia, presenter of The Gadget Show and former executive producer of Top Gear, Jon Bentley celebrates motoring's rich heritage and meets the engineers (and coders) who are transforming cars forever. From mobile hotel rooms to electric battery technology, and from hydrogen-powered cars to jetpacks, Autopia is the essential guide to the future of our greatest invention. Fully designed with illustrations and photographs, it is the perfect gift for car and technology enthusiasts everywhere.


Information

Year: 2019
ISBN: 9781786496362

ONE

CONNECTED AND AUTONOMOUS

The Rise of the Robot Cars

The bright-red Golf GTI weaved in and out of cones at the very limits of its grip. We took the sharp corner ahead and I was thrown to the left as the driver took a perfect racing line and began to accelerate out of the turn. I was being hurled round the test track at VW’s rather remote facility in Wolfsburg and feeling in awe of the test driver’s remarkable command of the course.
What made it more impressive was that the driver wasn’t even human. In fact, there was no visible driver at all. The steering wheel, accelerator and brake were all magically moving entirely of their own accord. This wasn’t even a Google car: it was 2006, when autonomous aspirations were yet to hit the mainstream. You can understand why I found the effect so stunning.
Car automation has a surprisingly long history. For decades, scientists have sought to slash the death toll on our roads by replacing the fallible human driver with a more capable technological alternative. Until recently such aspirations were confined to science fiction, their real-world potential thwarted by practicalities of technology and cost. But now, thanks to recent improvements in computer power, artificial intelligence, machine learning and sensor technologies, the impossible is becoming possible.
The driverless journey started with a radio-controlled car that hit the streets of New York in 1925. Inventor Francis Houdina fitted a brand-new Chandler with a radio receiver and ‘apparatus’ attached to the steering column. This turned in response to signals from a radio transmitter in a car following behind. According to a contemporary report in the New York Times, the car drove ‘as if a phantom hand were at the wheel’.
The initial unveiling didn’t go well. After making wildly uncertain progress down Broadway, the car narrowly missed a fire engine and crashed into a car full of news cameras recording the whole operation. Police instructed Houdina to abort the experiment. Even more bizarrely, the similarly named Harry Houdini became irritated by Houdina’s efforts and accused him of ‘using his name unlawfully in the conduct of business’. The famous magician broke into the Houdina Radio Control Co. and vandalised the place – a misdemeanour for which he was later summoned to court.
The automation journey stuttered on with ‘magic motorways’, which were first shown at General Motors’ ‘Futurama’ exhibit at the 1939 World’s Fair in New York. A brainchild of designer Norman Bel Geddes, the concept featured electromagnetic propulsion and guidance systems built into the road. Embedded circuits in the road were also behind experiments to guide cars by the American electronics company RCA. It started with model cars in 1953 and graduated to real ones in 1958. Sensors in the front bumpers picked up signals from a buried cable that provided information on roadworks and stalled cars ahead; the system would apply the brakes or change lanes as required. The company thought self-driving cars would be widespread on highways by 1975. The British government’s Road Research Laboratory (later the Transport Research Laboratory, or TRL) came up with a hands-free CitroĂ«n DS prototype a year or two later that worked in a similar way – and it too predicted that by the 1970s all motorways would feature a lane offering hands-free driving. Like many who followed, its claims were wildly optimistic.
The autonomy spectrum
There are six levels of automation as defined by the Society of Automotive Engineers:
Level 0 No automation.
Level 1 The most basic level of automation, whereby just one function of the driving process is taken over. The car might have lane centring or adaptive cruise control but not both.
Level 2 In which multiple functions are controlled – both lane centring and adaptive cruise control, for example.
Level 3 So-called ‘conditional automation’, where the car takes control of safety-critical functions in limited scenarios but still needs a driver ready to retake control whenever the system requests it. A Level 3 car might take over driving in a low-speed traffic jam, for instance.
Level 4 Whereby cars are autonomous but only in controlled areas – say, a robotaxi operating on a housing estate. Level 4 cars do not need steering wheels or pedals. (Some wags have suggested that horses are a Level 4 autonomous vehicle.)
Level 5 The ‘fully autonomous’ stage. The car can take over completely and doesn’t require special lane markings or any other dedicated infrastructure; it really can self-drive anywhere, and the ‘driver’ can go to sleep or do anything they wish.
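The taxonomy above maps naturally onto a small data type. As an illustration only (the class and helper names below are mine, not SAE's), the levels could be encoded like this:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Illustrative encoding of the six SAE automation levels."""
    NO_AUTOMATION = 0           # driver does everything
    DRIVER_ASSISTANCE = 1       # one function automated, e.g. adaptive cruise
    PARTIAL_AUTOMATION = 2      # several functions, driver still monitors
    CONDITIONAL_AUTOMATION = 3  # car drives in limited scenarios, driver on standby
    HIGH_AUTOMATION = 4         # fully autonomous within a controlled area
    FULL_AUTOMATION = 5         # drives anywhere, no human needed

def needs_human_fallback(level: SAELevel) -> bool:
    # Levels 0 to 3 all rely on a human being able to take over;
    # only Levels 4 and 5 can dispense with the driver entirely.
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

The ordering is the useful property: comparisons like the one above capture the key regulatory divide between systems that merely assist and systems that can be left alone.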
The first real stand-alone autonomous vehicle appeared in Japan in 1977, but it was far from being really roadworthy. Instead of buried electronics it relied on a huge computer that occupied most of the dashboard and the passenger footwell. Using information gleaned about its environment from inbuilt cameras, it could follow white lines on the tarmac – though only at a rather pedestrian 20 mph. Nevertheless, this was one of the first vehicles to move beyond level 0 on today’s autonomy spectrum, as defined by the American organisation SAE International, formerly known as the Society of Automotive Engineers.
German aerospace engineer Ernst Dickmanns upped the levels of speed and artificial intelligence with the help of a boxy Mercedes van. The VaMoRs prototype was tested successfully in 1986 and drove itself at 60 mph on the autobahn a year later. It led the pan-European research organisation EUREKA to launch the painfully named PROgraMme for European Traffic of Highest Efficiency and Unprecedented Safety, or PROMETHEUS project. With a significant injection of €749 million, researchers at the University of Munich developed camera technology, software and computer processing that culminated in two impressive robot vehicles: VaMP and VITA-2, both based on the Mercedes S-Class. In 1994, these piloted themselves accurately through traffic along a 600-mile stretch of highway near Paris at up to 80 mph. A year later, they clocked up 108 mph on a journey from Munich to Copenhagen that included a 98-mile stretch without human assistance.
Many manufacturers started developing limited autonomous features around this time, but they were strictly aimed at driver assistance and certainly couldn’t contend with the vast range of hazards we encounter all the time on the road. This would soon change when a new player entered the game: the US military. At the dawn of the twenty-first century, it sponsored the DARPA Grand Challenges, which promised a $1 million prize to the team of engineers whose vehicle could navigate itself fastest around a 150-mile obstacle course. Although no vehicle finished the inaugural event in 2004, it generated hype and helped spur innovation. Five vehicles finished the next year’s challenge, with a team from Stanford nabbing the prize, by then doubled to $2 million.
The Stanford team caught the eye of a certain technology company called Google and the rest is history. In 2010, Google announced that it had been secretly developing and testing a self-driving car system with the aim of cutting the number of car crashes in half. The project, which would later be renamed Waymo, was headed by Sebastian Thrun, director of the Stanford Artificial Intelligence Laboratory, and its goal was to launch a vehicle commercially by 2020.
Six Toyota Priuses and an Audi TT comprised the initial test fleet. Equipped with sensors, cameras, lasers, a special radar and GPS technology, they were completely interactive with their environment rather than restricted to a prescribed test route. The system could detect hazards and identify objects like people, bicycles and other cars at distances of several hundred metres. A test driver was always in the car to take over if necessary.
Google’s involvement prompted an explosion of interest in the subject. Investment by established brands in the technology and automotive industries ballooned, along with a bevy of new start-ups. According to American think tank The Brookings Institution, $80 billion was spent on self-driving car attempts between 2014 and 2017. This may prove to be a giant capitalist mistake that’ll make the South Sea Bubble, tulip mania and the subprime mortgage meltdown seem positively rational by comparison.
As usual, the targets of when full-scale autonomy would really be achieved were often overly ambitious. It becomes easier to see why when you appreciate how these wonders of technology are actually supposed to work.

Sensing the road

This brave new world of genuinely intelligent cars requires a diverse array of hardware with which the car tries to gain an accurate perception of its environment.
The most expensive, spectacular and distinctive sensors on a self-driving car are LiDAR, which stands for Light Detection and Ranging, usually housed in a roof pod. These systems bounce low-powered invisible laser beams off objects to create extremely detailed and accurate 3D maps of their surroundings. Their field of view can be up to 360 degrees and, because powerful lasers are used, LiDAR has the advantage of working in any lighting conditions.
Scientists have been using lasers to measure distances since the 1960s, when a team from the Massachusetts Institute of Technology (MIT) accurately logged the distance to the moon by measuring how long the light took to travel there and back. LiDAR’s pioneering use in cars came in the 2007 DARPA Urban Challenge, when five vehicles equipped with a revolutionary new sensor from Velodyne, hitherto an audio-equipment company, successfully navigated a simulated urban environment.
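The measurement behind both the moon experiment and every LiDAR pulse is a simple time-of-flight calculation: the light travels out and back, so the range is half the round trip. A minimal sketch (the constant and function name are my own):

```python
C = 299_792_458.0  # speed of light in a vacuum, metres per second

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Range to a target from the round-trip time of a laser pulse.
    The pulse covers the distance twice (out and back), hence the
    division by two."""
    return C * round_trip_time_s / 2.0

# A pulse returning after one microsecond implies a target roughly
# 150 metres away, which matches the eye-safe range limit discussed
# later in the chapter.
print(lidar_distance_m(1e-6))  # ≈ 149.9 m
```

The tight timing requirement is apparent here: resolving distance to within a few centimetres means timing the echo to within a fraction of a nanosecond.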
In 2016, LiDAR could cost around $75,000 per car. As of 2019 this sum has fallen to around $7,500 for a top-of-the-range unit. That needs to fall further still, and Ford is targeting approximately $500 for the component in the future. At present, most cars use one LiDAR unit, which creates a 360-degree map either by rotating the whole assembly of lasers or by using rapidly spinning mirrors. Many researchers think a key to lowering the cost will be solid-state designs with few or no moving parts, eliminating the need for such spinning mechanisms.
Mirrors could possibly be eliminated by so-called phased arrays, which use a row of laser emitters. If they all emit in sync the laser travels in a straight line, but by adjusting the timing of the signals the beam can shift from left to right. Flash LiDAR is another possibility. This operates more like a camera. A single laser beam is diffused to illuminate an entire scene in an instant. A grid of tiny sensors then captures the light bouncing back from various directions. It’s good because it captures the entire scene in one moment, but it currently results in more noise and less accurate measurement.
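The phased-array steering described above can be sketched numerically. For a uniform row of emitters with spacing d and a per-element firing delay Δt, the wavefront tilts so that sin(θ) = c·Δt/d. A hedged illustration (the spacing and delay figures in the usage note are assumptions for the example, not values from the text):

```python
import math

C = 299_792_458.0  # speed of light, metres per second

def steering_angle_deg(delay_s: float, spacing_m: float) -> float:
    """Beam angle of a uniform phased array. A constant delay between
    neighbouring emitters tilts the combined wavefront: each element's
    light starts c * delay further behind its neighbour's, across a
    baseline of one element spacing."""
    s = C * delay_s / spacing_m
    if abs(s) > 1.0:
        raise ValueError("delay too large for this element spacing")
    return math.degrees(math.asin(s))
```

With zero delay the beam fires straight ahead (0 degrees); for emitters spaced 1.55 micrometres apart, a delay of a few femtoseconds per element is enough to sweep the beam tens of degrees sideways, which is why no moving mirror is needed.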
[Illustration: Laser-powered eyes on the road. LiDAR sensors are getting smaller and cheaper.]
There are other stumbling blocks. Once most cars on the road have LiDAR, the systems could start interfering with each other: they normally fire their lasers in a straight line and rely on a super-accurate clock, so they could easily be upset by lasers on other cars operating in the same range. Similarly, sceptics worry about the system’s ability to cope in awful weather. Lastly, to avoid eye damage the lasers are fairly weak, currently limiting their range to about 150 metres. For a car to accelerate and join a stream of fast-moving traffic, the laser range needs to be at least 300 metres. LiDAR manufacturers are working on increasing the laser frequency to allow stronger output with a beam that is further from the visible light range. As the systems improve, it is likely other shortcomings will be dealt with too. The technology already functions decently in snow and rain, and it is getting better at avoiding interference.
While LiDAR allows the car to ‘see’ over short distances, a different solution is needed for longer distances. This is where radar comes in. Many new cars already have radar sensors, used for adaptive cruise control, blind-spot protection and automatic emergency-braking systems. Their field of view is about 10 degrees and they’re relatively cheap at between £80 and £120 per sensor.
Traditionally radar’s main advantage is the ability to perceive distance and velocity. It can measure speed from a long way away and it’s a well-proven technology. Radar can even see round things. Its wavelengths are relatively long so there’s significant diffraction and forward reflection – you can ‘see’ objects behind other ones. On YouTube there’s a video, taken inside a car driving along, which shows radar in action when the car’s automatic emergency-braking system suddenly activates and the brakes are applied. The view ahead is showing nothing out of the ordinary; but half a second later the car in front rear-ends the car ahead of it. The car’s radar was able to see that the (optically hidden) car two cars ahead had braked suddenly, and then braked hard itself to avoid a crash.
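Radar's speed measurement relies on the Doppler effect: an echo from a closing target comes back shifted in frequency by 2·v·f₀/c, so inverting that shift recovers the relative speed. A minimal sketch (the 77 GHz carrier is a common automotive radar band, assumed here rather than taken from the text):

```python
C = 299_792_458.0  # speed of light, metres per second

def closing_speed_ms(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative speed of a target from the Doppler shift of its radar
    echo. The reflection is shifted by 2 * v * f0 / c (the factor of
    two because the target both receives and re-emits a shifted
    signal), so v = c * shift / (2 * f0)."""
    return C * doppler_shift_hz / (2.0 * carrier_hz)
```

A shift of 10 kHz at 77 GHz corresponds to a closing speed of about 19.5 m/s, roughly 70 km/h, and the measurement needs no knowledge of what the target actually is, which is exactly radar's strength and, as the next paragraph explains, its weakness.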
Radar’s big disadvantage, and why it needs to be supplemented by other sensors, is that it can’t perceive detail. Everything’s just a blob. It’s no good at distinguishing between a pedestrian and a cyclist, even though it can tell whether they’re moving or stationary. A Waymo’s LiDAR, on the other hand, can not only tell the difference but can also tell which way the pedestrian or cyclist is facing.
Ultrasonic sensors are used to measure the position of objects very close to the vehicle. We’re accustomed to them in those bleeping parking sensors. They were invented in the 1970s, and the first volume-production car they appeared on was the 2003 Toyota Prius. Their range may be a mere 10 metres or so, but they are very cheap and provide essential extra information during low-speed manoeuvring and about adjacent traffic.
High-resolution video cameras are an important part of a self-driving car’s equipment. They are used to recognise things like traffic lights, road markings and street signs – objects that offer visual clues but no depth information. Cameras can also detect colour, which LiDAR can’t, and they’re better at discerning differences in texture. When in stereo they can also help calculate an object’s distance – although this effect diminishes the further away something is, which li...
