It Doesn't Have To Be Right…

… it just has to sound plausible


Science fiction has lost the plot

I recently finished The Dog Stars by Peter Heller, which was not published as science fiction but was shortlisted for the Arthur C Clarke Award last year. In it, a flu pandemic has killed 99% of the population of the US, and the survivors have, of course, turned to warlordism and survivalism. It’s not a very good book – its presence on that shortlist is, frankly, mystifying. One character appears to be ripped off from Walter Sobchak, John Goodman’s part in The Big Lebowski; and the narrator previously suffered minor brain damage from a bout of meningitis and so narrates the novel in mildly broken English… which serves no purpose in the story at all.

Anyway, warlordism and survivalism… There’s a long tradition of such post-apocalypse tales in science fiction and I’m sure we can all think of at least half-a-dozen examples. I’ve objected before to the assumption that the survivors of any apocalypse would immediately start killing each other, when clearly cooperation is the only sustainable strategy for survival.

And then there’s the dystopia, a much-beloved setting for YA. In almost all cases, a privileged elite enjoy lives of luxury while the bulk of the population either scrabble for a living below the poverty line, or are rigorously oppressed with no freedom to object; or both. I can understand the dystopia’s appeal for the YA market. In order to “break” the setting, which is the point of the story, the protagonist needs to be a super-special snowflake – which not only feeds into teenage narcissism but also relies upon, and reinforces, the risible “Great Man of History” theory, which is itself the sort of nonsense kids believe.

It could be argued that such dystopias only reflect the real world, that their popularity is a symptom of the times we live in. Perhaps that’s true. Certainly the UK is currently governed by a cabal of greedy fascists who are hell-bent on selling off as much of the country as possible to their plutocrat friends. There is not much difference between Downing Street and Panem’s Capitol.


It strikes me that these two branches of science fiction are actually conditioning us to accept our current situation. Dystopia readers are waiting for a Katniss – and then everything will be all right. Post-apocalypse readers know they’re currently better-off, even if they’re being oppressed, than they would be with gangs of marauding slavers, rapists and murderers roaming the countryside. Science fiction was once a literature which encouraged change, which explored ways and means to effect changes. Now it’s comfort reading, it makes us feel good about our reduced circumstances because at least we’re not suffering as much as the fictional characters we read about.

And if it’s not apocalypses and dystopias, it’s interplanetary or interstellar wars. Making us feel good about our governments’ military adventurism. And fictional universes that embody so many libertarian sensibilities it’s becoming increasingly hard to argue that right-wing politics are not the default mode for the genre. Even left-wing authors create worlds built on right-wing principles, as if dramatic stories were impossible any other way. Which is simply not true.

Once upon a time, science fiction was driven by an outward urge. True, we know a great deal more about our planet and our universe than we did then. But there is still a lot we don’t know – the depths of the oceans, for example, remain mostly unexplored. We’ve found over 1800 exoplanets, but the furthest we’ve trodden is our own moon, 400,000 km away – and that was over forty years ago anyway. What happened to that urge? Where are the science fiction novels inspired by it? I can perhaps think of only a handful published in the past twelve to eighteen months which might qualify.

The bulk of sf currently being published seems more designed to accommodate us to our meagre lot. It’s not holding up a mirror to our times, it is complicit with those forces which shape the modern world. It is telling tales to maintain the status quo by showing just how improbable, how impossible, meaningful change is.

A friend is currently trying to put together a list of sf novels about climate change – and it’s perhaps telling that most such science fictions take place after the climate has crashed. It’s almost as if we’re unable to prevent it – it’s going to happen and there’s nothing we can do about it. Except, of course, there is. There are lots of things we could do. But certain powerful interests in the modern world don’t want the changes preventing climate crash would entail. So we have become resigned to consuming stories in which climate crash is a fait accompli.

Back in 1926 when Hugo Gernsback published the first issue of his magazine and so created the genre, he saw “scientifiction” as a possible force for good. And it’s certainly true that fiction can have profound effects on the real world – and not just in terms of inspiring nerds to invent new gadgets. These days, however, science fiction has all the importance of middle-class fad foodstuffs. We consume it like we consume Greek yoghurt – and it’s not even that, it’s more like a bee flew over a pot which was then filled with curdled milk from a dog they found wandering the back streets of Athens…

So what went wrong? When did we become so resigned to the present, so resigned to our powerlessness, that we began to ignore not only change but the possibility of change in our science fictions? And what can we do about it?


Fables of the Deconstruction, #1: Robots

All too often, people point at the tropes in a piece of fiction and use them to categorise it. This story has spaceships in it, therefore it’s science fiction; this one has elves, so it must be fantasy. One of the tropes often used to “identify” sf is the robot – well, a robot is clearly the product of technology, it’s an artificial person, a mechanical man or woman (or neither). What’s not science-fictional about that?


The term “robot” comes from Karel Čapek’s RUR (1920), and is derived from the Czech word robota, a local form of serfdom in which serfs had to work only for a specified number of days each year for their liege. RUR was first translated into English in 1923 but, according to the OED’s Science Fiction Citations, the word’s first appearance in English wasn’t until 1925, in a novel by French-born British writer Thomas Charles Bridges, The City of No Escape. However, it was the mid-1930s before “robot” appeared in US science fiction magazines. It was then, of course, co-opted by Isaac Asimov, who wrote some forty short stories and a few novels (it’s hard to be precise as Asimov spent much of his later years trying to stitch his oeuvre into one great stupid shared future history, featuring both psychohistory and robots).

Čapek’s robots were actually biological – what are now commonly referred to as “androids” – so I’m not entirely sure why the term was adopted for purely mechanical beings. Perhaps this was because the mechanical being was an already existing trope: the automaton. (The SF Encyclopedia indicates there was a story in the November 1931 issue of Amazing titled ‘Automaton’.) But automata were real things – marvels of mechanical ingenuity, show-pieces, designed to display their inventor’s cleverness and so win them the patronage of some wealthy potentate; and they were often fake (the Mechanical Turk, for example). Automata were typically good for a single task, and in no way a replacement for a human being.


Go even further back, of course, and you have the golem, an automaton powered and controlled entirely by magic. There are also automata in Greek mythology, built by Hephaestus – such as Talos, the giant bronze man who protected the island of Crete (although it seems the clockwork owl in Clash Of The Titans is an invention of the film’s writers). But neither automata nor golems fit in with early science fiction’s burning enthusiasm for science and engineering, for technology. If electronics magazines showed readers how to build their own television sets, their readers were hardly likely to be interested in a mechanical servant which required magical incantations to operate.


And yes, servant – because technology exists, so these magazines would have you believe, to make life easier and more comfortable, and what could improve comfort more than a servant – to do the cooking, cleaning and laundry, fetch the mail, etc. And because these robots are servants, they must be in the shape of a human being. Unlike real servants, however – and here lies their obvious superiority – they don’t require wages, food or rest, will always perform tasks to the high standard required, and will never be lazy, sullen, unresponsive or rebellious. In other words, robots are perfect slaves, but without offending anyone’s delicate morals. This could, however, be taken too far, as in Jack Williamson’s ‘With Folded Hands…’ (1947), in which robots do such a good job of looking after humanity that the race becomes too weak to survive without them. Or they could prove so ubiquitous that some humans might believe they were robots themselves, as in Margaret St Clair’s ‘Asking’ (1955) – although once the protagonist learns her true nature, she adopts all the arrogance of a slave-owner toward robots.


In the real world, robots are entirely different. They’re more often referred to by a name specific to their purpose, such as a Computer Numerical Control (CNC) machine or Autonomous Underwater Vehicle (AUV) or space probe. They’re built for specific tasks, or to perform within specific spheres of operation; and programmed only for that task or for that sphere. They’re used in situations that are too dangerous for human beings – eg, AUVs and space probes – but they’re not capable of everything a human could do. Or they’re used to perform repetitive tasks more quickly, more frequently and more accurately than a human could. In such cases, building robots in the form of a human being is not an advantage.

Science fiction, however, rarely shows robots as CNC machines, AUVs or space probes, but almost always as anthropomorphic machines. (Although Star Wars didn’t – not only is the decidedly non-humanoid R2-D2 one of the most famous robots in sf cinema, but remember the variety of robot forms inside the Jawas’ sandcrawler?) The SF Encyclopedia claims robots have proven popular in sf cinema because they can be played by human actors. (These days, of course, they’re done using CGI.) But in written sf? Why this insistence on human form? Why this need to present them as mechanical humans? After all, pretending robots are human is effectively treating them as an underclass, as slaves. If they are human in all but origin – something which applies just as much to artificially-created persons, such as the title character in Paolo Bacigalupi’s The Windup Girl – if they are human to that degree, then to treat them as not-human is no more than scientific bigotry, it’s the sort of immoral rationalisation used by owners of slaves.


There are certainly science fictions featuring robots which question the morality of their existence, but they’re uncommon. Asimov used his robots to solve simplified moral conundrums, based around his Three Laws, which are themselves a moral code reduced to a single dimension – a moral code, that is, which does not question the existence or ownership of robots. Implicit in the use of anthropomorphic robots in almost every science fiction is an acceptance of slavery. And, to make matters worse, such robots are often then dehumanised – Cylons referred to as “toasters” in Battlestar Galactica, for example. Having created these ersatz people and enslaved them, they need to be reduced to the status of machines in order to justify ownership. They’re the people we demonise because we want to excuse our poor treatment of them, because we want to justify our belief that they are inferior to us. Much like the Tories are doing to the poor and unemployed in 21st Century Britain – calling them “skivers” and “scroungers”, as if it is their own fault, it is something they’ve done themselves, which means they’re not as good, not as human, as everyone else.


And speaking of Cylons, they’re another form of robot common in science fictions: the killer robot. Arguably, these sorts of robots are more common in twenty-first century science fictions (horribly old-fashioned Hugo-nominated stories by Mike Resnick notwithstanding). Robots make an excellent enemy because they are implacable – unlike humans, or even aliens, they will not stop, they cannot surrender, and you can destroy as many of them as you like without worrying about the morality of it all. Likewise, generals can sacrifice countless robots for the most trivial of gains, and it doesn’t really matter since they’re little more than smart bombs. It’s the machine-nature of war-robots that is stressed, and not their human-like qualities. Owning people, it seems, is fine in sf, but the genre still feels some small qualms at killing them in great numbers.

Of course, real robots are not people. No matter how sophisticated their programming, the code which drives them is still a series of IF statements and WHILE and FOR loops. Any operation they perform must be part of their programming… or they can’t do it. Even if they do have the right snazzy tool fitted to one of their manipulator arms. Smartphones are pretty damn clever devices, but no one would ever consider them more than a machine. The same is true of supercomputers, Voyager 1, Curiosity, a UAV or those dancing industrial robots in that old Volkswagen advert.
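The point can be made concrete in a few lines of code. This is a deliberately toy sketch – all the task names and functions are invented for illustration – but it shows the shape of the thing: a controller is just loops and conditionals over a fixed repertoire, and anything outside that repertoire simply cannot be done.

```python
# A toy "robot controller": loops and conditionals over a fixed repertoire
# of programmed tasks. Every name here is hypothetical, purely illustrative.

PROGRAMMED_TASKS = {
    "pick": lambda part: f"picked {part}",
    "place": lambda part: f"placed {part}",
}

def run_robot(commands):
    """Execute a list of (task, argument) commands against the repertoire."""
    log = []
    for task, arg in commands:            # the FOR loop
        if task not in PROGRAMMED_TASKS:  # the IF statement
            log.append(f"ERROR: '{task}' is not in my programming")
            continue
        log.append(PROGRAMMED_TASKS[task](arg))
    return log

print(run_robot([("pick", "widget"), ("paint", "widget"), ("place", "widget")]))
# -> ['picked widget', "ERROR: 'paint' is not in my programming", 'placed widget']
```

Ask it to paint and it fails, not out of rebellion but because painting was never written into it – which is the whole argument in miniature.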


Perhaps people think there are no dramatic possibilities, other than in military sf, in robots-as-machines. Perhaps that’s why authors and film-makers have their robots look and behave like human beings. But once upon a time, science fiction’s spacecraft all used to resemble pointy rockets, of the sort painted by Chesley Bonestell in those Collier’s Magazine articles by Wernher von Braun. Look at the cover art of any late twentieth century or twenty-first century science fiction novel, however, and you’ll now see a huge variety in sizes, shapes and designs of spaceships.

What I think would be interesting would be to ditch the anthropomorphic robot, the ersatz human, with all its dodgy moral baggage, and instead treat robots as they actually are – like space probes, CNC machines, UAVs: ie, accept that they are products of their programming, they are tools, very sophisticated tools, but ones which can only perform tasks for which they have been designed and programmed. After all, it’s the twenty-first century, we shouldn’t be presenting worlds in which people, artificial or otherwise, are enslaved; we should be creating visions of the future in which technology plays a true role, is not just setting or a piece of hand-wavery used to justify magical McGuffins. Far too many science fictions use genre tropes as little more than window-dressing for stories based on historical templates and loaded with historical baggage.


Science fiction under pressure

As a species, we have little experience with naturally hostile environments, a century’s worth perhaps. By “hostile”, I don’t mean environments such as the Arctic, which are uncomfortable, or could prove fatal without basic survival tools. I mean environments which are pretty much instantly lethal without complex technological assistance. Human beings have to date visited two: space (including the lunar surface) and the sea deeper than 200 metres below the surface (the lethal threshold is arguably shallower than that, but the depth record for free diving currently stands at 214 m).

A scene from Luc Besson’s The Big Blue

Science fiction has covered the first of these in countless stories and novels, with varying degrees of accuracy. But no reader of sf doubts the hazardous nature of outer space. While all too many science fictions present magical technology allowing human beings to live and work and make war in space, there’s still a background of ever-present danger. In fact, it’s almost become a cliché.

But what of the opposite extreme? High atmospheric pressure rather than vacuum? Certainly the former has been covered in science fictions, though the genre tends to treat it as much the same as the latter – ie, both are survivable when wearing a spacesuit. But spacesuits are actually just personal spacecraft, designed for the same environment as spacecraft – ie, space. (If that’s not belabouring the point a bit much.) They provide a self-contained atmosphere and protection from radiation. A spacesuit wouldn’t work on a planetary surface with a datum pressure of, say, 50 atmospheres. It would be unwearable, constricted by the gas pressing against every square centimetre, its joints locked since they are designed to maintain a constant internal volume. You know how a submarine gets crushed when it sinks too deep in the sea? That’s what would happen to a spacesuit… and the person inside it.

A JIM suit

Which doesn’t mean hyperbaric environments would necessarily be out of reach. One solution would be to use an Atmospheric Diving Suit, which is much like a spacesuit but designed to keep pressure out rather than in. The current depth record in an ADS is 610 m (2000 ft), which is 61 atmospheres. Perhaps with the advent of new and stronger materials, or some sort of force-field, environments with much higher pressures would be accessible to someone in an ADS.
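The depth-to-pressure arithmetic used here and below is simple enough to sketch. This assumes the usual rough rule of thumb – one extra atmosphere for every 10 metres of seawater, on top of the atmosphere of air overhead – so the figures land within an atmosphere or so of those quoted in the text:

```python
def pressure_atm(depth_m):
    """Approximate absolute pressure at a given depth of seawater:
    1 atm of air overhead, plus roughly 1 atm per 10 m of water."""
    return 1 + depth_m / 10

# The ADS record depth, and the deepest simulated saturation dive:
print(pressure_atm(610))  # 62.0 -> the "61 atmospheres" quoted, give or take rounding
print(pressure_atm(701))  # 71.1 -> roughly 71 atmospheres
```

The same two-line function converts any of the depths mentioned in this piece: 2,000 feet (610 m) of The Deep, or the 200 m boundary of the “hostile” zone.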

Record holder Chief Navy Diver Daniel P Jackson in the Hardsuit 2000

The only recent example that comes to mind of a sf novel set (partly) on a world with a hyperbaric environment is Alastair Reynolds’ On the Steel Breeze (2013), the second book of his Poseidon’s Children trilogy. Several chapters take place on the surface of Venus, which, as well as a mean surface temperature of 462°C, has a surface pressure of 92 to 95 atmospheres. In the novel, some of the characters go EVA on the surface, an apparently not uncommon pastime, in “surface suits”:

The suits were essentially ambulatory tanks. They were glossy white, like lobsters dipped in milk. They had no faceplates, just camera apertures. Instead of hands, they had claws. Their cooling systems were multiply redundant. That was the critical safety measure, Chiku learned in the briefing. Death by pressure was so rare that it had only happened a few times in the entire history of Venus exploration. (p 128)

Clearly – refrigeration aside – Reynolds’ surface suit is much like a beefed-up ADS, and in no way resembles a spacesuit. Which is as it should be.

But what if a closer interaction with the environment is required? Perhaps there’s a need for something more dextrous than “claws”? Or human beings must be as unencumbered as possible in order to live and work in this hyperbaric environment. Obviously not the surface of Venus, but perhaps somewhere less extreme…

Theo Mavrostomos at a simulated depth of 701 m

You can saturate a human body up to pressures around 70 atmospheres – that’s the current record, set during a simulated saturation dive by Theo Mavrostomos in 1992. He spent two hours at a depth equivalent to 701 metres (2300 feet). The term “saturation” means the person’s tissues have absorbed the maximum possible partial pressure of gas. A sudden return to normal atmospheric pressure would result in explosive decompression. A too-quick return would cause the absorbed gas to bubble out of the person’s tissues – the “bends”, or decompression sickness, which can be fatal. There are other hazards associated with hyperbaric environments. At pressures above 5 atmospheres, nitrogen causes nitrogen narcosis, or “the rapture of the deep”; and at pressures higher than 15 atmospheres High Pressure Nervous Syndrome (HPNS) can affect people breathing helium-oxygen mixtures.

A pair of North Sea saturation divers

High pressure air is extremely difficult to breathe – not just the physical act of drawing it into the lungs, but also the lungs diffusing it into the blood. By using a less dense gas, such as helium, to maintain the correct partial pressure of oxygen (too much oxygen is poisonous), the human body can handle greater pressures. But this also presents its own set of problems – there’s HPNS, but also helium’s excellent conductivity of heat, not to mention the shortening of sound wavelengths resulting in the infamous “Donald Duck” voice (at the limit of saturation diving, this can make divers pretty much unintelligible). HPNS can be mitigated by adding some nitrogen back into the mix, and “unscramblers” are used on the radio links to divers but these are not wholly effective. There is no solution to helium’s conductivity other than bloody great heaters scattered throughout the saturation system.
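“Maintaining the correct partial pressure of oxygen” is, at bottom, a single multiplication: partial pressure equals oxygen fraction times ambient pressure. A minimal sketch, assuming an illustrative storage ppO2 of 0.4 atm (actual saturation schedules vary) and the rough 10-metres-per-atmosphere rule:

```python
def o2_fraction(depth_m, target_ppo2=0.4):
    """Oxygen fraction needed in the breathing mix so the partial pressure
    of oxygen stays at a safe target (too much oxygen is poisonous);
    the balance of the mix is helium."""
    ambient_atm = 1 + depth_m / 10  # rough seawater pressure rule
    return target_ppo2 / ambient_atm

# At a 700 m saturation depth, the mix is almost entirely helium:
print(f"{o2_fraction(700):.2%} oxygen")  # about 0.56% oxygen
```

Which makes vivid just how alien deep saturation gas is: barely half a percent oxygen, where a lungful of it at the surface would leave you unconscious in moments.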

At present, we’ve about reached the limit possible with saturation diving. In the oil industry, working at depths of 100 to 250 metres (320 to 820 ft) is routine. Deeper than 450 metres (1500 ft), ROVs are used. Greater pressures than 70 atmospheres may be possible – perhaps by using hydrogen, which has half the molecular weight of helium. Unfortunately, hydrogen is extremely flammable, although some helium could be added to render it safe. French diving company Comex consider it possible to reach depths of 1000 metres (3281 ft), or 100 atmospheres, using a hydrogen mix, but no one has tried and there’s currently no impetus to do so.

A still from Ridley Scott’s Alien

Where this gets interesting is that, as far as I know, no one has used this in science fiction. While hyperbaric environments, or dense atmospheres on high-gravity planets, perhaps even gas giants, have undoubtedly been used, it’s either been with some science-fictional equivalent of a ROV, or magical spacesuits which operate as well in 100 atmospheres as they do in a vacuum, or perhaps even a kind of armoured suit capable of withstanding great pressure like a souped-up ADS. Dense atmospheres seem mostly to appear in science fiction only as settings for winged aliens or humans, such as in Vonda N McIntyre’s ‘Fireflood’ (1979) or ‘Wings’ (1973; see here). Gas giants are quite common in sf, though mostly the action takes place in their upper atmosphere. One that doesn’t is Poul Anderson’s ‘Call Me Joe’ (1957), in which a disabled operator “drives” a ROV on Jupiter’s surface – James Cameron used a similar idea in Avatar (2009).

But sf typically treats alien worlds – what we now call exoplanets – as either extensions of space, or Earth-like, or near enough Earth-like not to make any difference. Those hardy explorers of countless science fictions often have little more to deal with than inclement weather, although perhaps one or two might need a breathing mask… No one has ever thought of the Earth’s surface as remotely like space – it’s an environment entirely distinct, and although it covers a wide range of conditions they’re all survivable. So why no variety in alien worlds? Ignorance initially, almost certainly; but then it becomes about the story, about some “alien” aspect of the exoplanet which drives the plot, as in Marion Zimmer Bradley’s Endless Voyage (1975; see here). Yet, allegedly, science fiction is about science and technology, and how we use it…

Mars Arctic Research Station

Surely it would be more interesting to explore the techniques and technology that might be used to explore, or perhaps even colonise, an environment that is neither Earth-like nor vacuum? A saturation system strikes me as a perfectly suitable method to use in a hyperbaric environment; and one that is filled with dramatic possibilities. Just think, you could murder someone by knocking them out and then putting them in a balloon’s gondola… Too much science fiction, to my mind, fails to get across the true experience of the strange environments in which it takes place. It’s passed off as “setting” using a few incidental details, but in all other respects treated as if it were, say, middle America, or the Wild West. A more rigorous approach to such things would be far more interesting.

Of course, it’s not just exoplanetary environments. There’s certainly science fiction set underwater at great depth (see my earlier blog post on the topic here), but most such sf imagines that human beings have been physiologically engineered to survive in that environment. But, as far as I’m aware, no one in sf has made the mental leap from deep sea to hyperbaric planetary surface.


Aiming Deep

I don’t normally write about television series here – in fact, I think I’ve done so fewer than half a dozen times in the past. And usually then it’s about programmes I really like and think are very good – which would be, in no particular order, Battlestar Galactica, Waking the Dead, Scott & Bailey, In Plain Sight, Fringe, Twin Peaks, The X Files, Life on Mars / Ashes to Ashes, and Space Odyssey: Voyage to the Planets. (I make no apology for the last of those.) However, on this occasion, I’m going to write about something I didn’t think was very good at all.

Last weekend, I watched all five episodes of The Deep, a Tiger Aspect Productions serial originally broadcast on BBC 1 during summer 2010. Much like the movie Sphere – with which it shares some similarities – there are some neat ideas in The Deep, and a setting that could be really cool…


The neat ideas first: 1) exploring vent-fields beneath the Arctic icecap, and finding a thermophilic biodigester which produces biogas with unprecedented metabolic efficiency; and 2) discovering that the Russians have been secretly drilling for oil under the seabed in a UN Exclusion Zone beneath the icecap (shades of Frank Herbert’s The Dragon in the Sea?). Idea 1 provides the motive for the expedition to visit the vent-field on the Lomonosov Ridge and a satisfyingly earth-changing end-game. Idea 2 gives us the villains and the obstacles they present which the protagonists must overcome in order to win through to the end.

The setting is 2,000 feet deep in the Arctic Ocean. So the cast are confined to the interior of submersibles and/or submarines. At that depth, the pressure is around 61 atmospheres. Submarines make for really dramatic environments – they’re claustrophobic and subject to unforeseeable external hazards; and in this case, they’re high-tech too. The Deep features three such vessels: Hermes, a research submersible, which disappears with all hands at the start; Orpheus, a second research submersible which is sent six months later to continue Hermes’ research and also discover her fate; and Volos, the giant submarine the Russians are using as a base of operations for their illegal drilling. Each vessel also carries a mini-submersible – single-person craft, though they can carry two at a squeeze.

So far so good. Orpheus arrives at the Lomonosov Ridge, discovers the wreck of Hermes, but is then disabled – and one of the crew killed – by… something. They are captured by Volos, but the Russian submarine remains silent. Aboard Volos, the Orpheus crew discovers all but two of its crew dead, cause unknown. The two survivors try to commandeer Orpheus, but she’s going nowhere because her systems are down. These are fixed by salvaging “the motherboard” from Hermes. But, oh no, the nuclear reactor aboard Volos is over-heating and will soon explode. Except there are other survivors aboard Volos, including a member of Hermes’ crew. It’s a race against time to rescue them before the Russian sub blows up.


Which happily it doesn’t, as one of the crew does a Spock and saves the day (at the cost of his own life). Oh, and the thing that killed the Volos’ crew and disabled both Hermes and Orpheus proves to be… a giant underwater radar. Which the Russians were using to probe beneath the seabed and find oil deposits.

Only now there’s another problem. That thermophilic biodigester is really important, but all the samples aboard Volos are dead. Fresh ones are required… from the oil well at the bottom of the nearby Laurentian Abyss. Well, they call it the Laurentian Abyss, and claim it’s 8,500 feet deep; but the real Laurentian Abyss is closer to 20,000 feet deep. So they have to go and get another sample. But the captain of the Volos won’t let them go, and in fact plans to use the giant underwater radar to destroy Orpheus. But they defeat him. And go and fetch another sample of the thermophilic biodigester by lowering a one-person sub into the well itself. And then the Volos blows up. And the good guys – well, the ones that are left – escape.

There you have it: five sixty-minute episodes of nail-biting underwater drama… Except. There’s just so much that is plain wrong in those five hours that the entire serial can’t help but sink into the abyss…

Those mini-submersibles I mentioned… They’re carried inside each of the vessels, and leave it via a moon pool. At a depth of 2000 feet, at a pressure of around 61 atmospheres. So the interior of Hermes, Orpheus and Volos would also have to be pressurised to 61 atm… or be instantly flooded. We’re informed the crew are breathing “neonox”, a neon-oxygen mix, at high pressure, so, you know, it’s a little bit plausible. The current depth record is held by Theo Mavrostomos who, as part of Comex’s HYDRA 10 experiment in 1992, spent two hours at 2,300 feet (71 atmospheres) in a hyperbaric chamber on land. But the entire experiment took 43 days: 15 days compression, 3 days at 68 atmospheres, and 24 days decompression. There are no two weeks of compression in The Deep.


It’s borderline plausible – one man has spent two hours at 71 atm and survived, but that was 20 years ago. However… there’s no reason why any of the subs should have a moon pool. The mini-submersibles could just dock to a hatch. So then the interior could be pressurised to 1 atm. Just like real-life submersibles. In Sphere, the film adapted from Michael Crichton’s novel, the underwater habitat is 1,000 feet beneath the surface, but it has a moon pool. However, it’s needed because the cast go saturation diving. They go out into the water. No one does in The Deep.


And, of course, nuclear reactors don’t explode when they overheat. Nor do they require the control rods to be inserted by hand – as they must be aboard Volos (hence, the Spock scene). The US Navy has been operating nuclear-powered submarines since 1954, and the Russians since 1959. Several have been lost with all hands. None have exploded. (Incidentally, it’s never mentioned what powers Orpheus. Really really powerful and long-lasting and giant and heavy batteries, I imagine.)

Then there’s that giant underwater radar. And numerous mentions of “calling on all frequencies” by various members of the subs’ crews. Radar doesn’t work underwater. That’s why they use sonar. And radio doesn’t work very well below the surface either. Various navies have used extremely low frequency radio for communication with submarines (ie, with wavelengths of several thousand kilometres), but it’s expensive and technically difficult. Which is why acoustic transmission is the most common form of communication with vessels underwater.

And when the high-powered radar waves hit the Orpheus and shorted out its systems? That’s because it “reversed the polarity” on the motherboard. That’s what one of the characters actually says. And it seems Orpheus has a single motherboard through which everything must be routed – not only does its failure totally disable the sub, but replacing it later fixes everything in one fell swoop. Never mind building in redundancy…

But, you cry, these are piffling! What do I care about HYDRA 10 or nuclear reactors going boom? The Deep was jolly exciting drama and those are mere trivial details. After all, the moon pool looked pretty neat, so what does it matter if no real submersible could descend to 2,000 feet with one? Or even to 8,500 feet.

As for the other niggles, they’re even more trivial. So what if one of the Russians lights up a cigarette at 70 atm of pressure? So what if another character declares Volos, at 300 metres long, larger than any surface vessel – when both supertankers and US Navy aircraft carriers are all over 300 metres in length (and the largest supertanker ever built, Seawise Giant, was 458 metres long)? So what if a marine biologist is asked to do an autopsy and seems to know what he is doing, despite saying he’s only ever dissected a rabbit for his Biology GCSE? So what if the thermophilic biodigester produces nitric acid as a byproduct of its metabolic process, and the acid has been corroding all the subs’ hulls – but the concentration would be so weak in, like, the Arctic Ocean that it couldn’t even corrode tissue paper? So what if the underwater well, from where they fetch the fresh sample, is a hole several metres in diameter – and when have you ever seen an oil well, or even a drill bit, that large? That’s less than trivial! It is meaningless.


There were problems with the story itself, true; and with the script. Characters telling each other stuff they should already know – “We’re breathing Neonox, a mixture of neon and oxygen”, “That’s a vent-field”, etc. Not to mention a dramatic scene resulting wholly from the fact that two switches had been swapped over but their labels had not been changed.

My point is that the details I’ve mentioned above could all be easily checked. And putting them right would not have affected the story (although a hatch doesn’t look as cool as a moon pool, I’ll grant). But when you leave stuff like that in, it will annoy some people and you will lose them. Why not get it right and keep them? No one’s saying it should be, “That submarine must be 300 metres long, that’s nearly as long as a supertanker or a US Navy aircraft carrier, but not as long as Seawise Giant, which was 458 metres long.” Because that would be silly. Instead of, “That submarine must be 300 metres long, that’s longer than anything you’ll see on the surface,” why not, “That submarine must be 300 metres long, that’s really big for a submarine”?

The giant underwater radar is more problematical as it’s a plot device. Something has to generate the EMP which leaves Orpheus dead in the water, something has to kill the crew of Volos. There’s a lovely line in the Wikipedia article on offshore geotechnical engineering, which goes, “For the sub-bottom stratigraphy, the tools used include boomers, sparkers, pingers and chirp.” The article explains that geophysical surveys make use of a combination of sonar and seismic refraction, so perhaps one or more of those might have been used instead of the implausible giant underwater radar.


When I started this post a few days ago, it was with the intention of just pointing out some of the howlers in The Deep. But yesterday’s discussion on Twitter suggested to me there’s a wider point to make. When you’re writing, there’s stuff you make up and stuff you look up. And if you don’t know which is which, then perhaps you need to rethink your story. Never assume your readers won’t spot it when you’ve got details wrong. It’s perhaps forgivable when the knowledge required is arcane or difficult to find. But the simple stuff? Characters using the Jubilee Line on the London Underground in 1940, 37 years before it was built? Characters referring to the Paras as “redcaps”, when that’s the nickname of the Royal Military Police? Why would a writer not bother to look these things up? If they’re that lazy with the details, what does that say about the story, or the novel, as a whole?

You can’t, as they say, please all of the people all of the time – but you can at least make an effort to please as many as you possibly can. If I’m writing and I want something to happen in my story but I’m not clear on the details, then I look them up. I don’t just wing it and hope no one notices. This does not mean every story needs to be fact-checked. It’s not always necessary. I wrote a story about an ATA pilot who flew Spitfires, so I researched both. I wrote another story set in an unnamed town during an unnamed decade (which sort of resembles the 1940s) – no research was necessary. If a story is set on an invented world in an invented galactic empire, then there’s not much you can look up anyway. But if it’s set in London, or Belfast, or beneath the Arctic icecap – then it’s time to get googling.

The internet is an amazing tool, so why not make use of it? Pretty much all of the information mentioned in this article, I found online. And if I could find it, so could anyone…


On genres, modes, distances and invention

I won’t say where, or on what, I was at the time but this weekend I was thinking about definitions of hard science fiction for a podcast, and my thoughts spiralled out from there to definitions of science fiction itself. And it occurred to me that sf narratives break down into three rough forms: encountering the Other, embracing the Other and rejecting the Other. And the more I thought about it, the more it seemed to hold true. Think of a random sf novel, like… Dune. That’s embracing the Other – both Paul Atreides becoming a Fremen and learning to use his new-found powers.

Since the genre’s earliest days, science fiction stories have been characterised by distance just as much as they’ve been characterised by science and/or technology. Alongside the Gernsbackian tales of new inventions which would improve the lives of all were stories of alien places and the strange peoples found there. Distance is a signifier for the “exotic” (in both meanings of the word). Before science fiction, they told tales of the South Seas.

The further away a place is, the more Other it is – it’s a simplistic formula, but this is pulp fiction, after all. The difficulty of the journey is less important than the distance travelled. There are very few Shangri-Las hidden in inaccessible mountain valleys, or their galactic equivalents, but lots of worlds on the rim of the empire or the edges of the galaxy. Travel itself is not uncomfortable, but does take time. Real spacecraft are small and cramped, with no amenities. Sf’s starships are interstellar ocean liners with cabins and restaurants and promenades. This is because the journey does not matter, it is only a metaphor. If there are hardships, they are associated with either finding the destination, or at the destination itself. Off the top of my head, the only sf story I can think of in which the journey itself is an obstacle is Ursula K Le Guin’s ‘The Shobies’ Story’ (in Gwyneth Jones’ Buonarotti stories, and her novel Spirit, there’s a similar effect with interstellar travel, but it does not make the journey an obstacle). No doubt there are other stories, though I maintain such stories are rare within the genre.

But then, there’s not much that’s Other about the act of travelling from A to B. Even in the Le Guin story mentioned above, the means of making the journey affects the travellers’ perceptions of their destination, making the act of encountering, or even embracing, the Other so much harder and more prone to misunderstanding.

Space opera, of course, is traditionally predicated on rejecting the Other, as is military sf. The drama in both subgenres typically derives from conflict, either from within the world or from without. And the further the enemy is from known space, the more Other they generally are. Even when they’re humans, they’re typically barbarians from the edge of the empire – though that may simply be science fiction ripping off the history of the Roman empire… which it has done far too many times.

The same argument might well apply to fantasy, even though it is a different genre. I suspect there are more narratives of rejecting the Other in epic commercial fantasy than of the other two forms. Given its generally consolatory nature, this is no surprise. Other modes of fantasy may well be more evenly distributed – I’m not as well read in fantasy as I am science fiction. It might well be that the same argument does not apply to fantasy, given that it is an entirely different genre to sf.

Science fiction is not, and has never been, a branch of the fantastic. You can’t categorise fiction by the degrees of invention it exhibits. All fiction by definition contains invention, whether it’s literary fiction with made-up characters, fantasy with made-up worlds, or science fiction with made-up science and/or technology. Nor can you categorise by trope… because first you would have to define each and every trope. And lay out the conditions under which each trope is fantasy and not science fiction, or vice versa. If a fantasy novel has a dragon in it, then it does not follow that all novels containing dragons are fantasy. And so on. Science fiction is a fundamentally different genre to fantasy, and it’s an historical accident that the two are typically marketed alongside each other.



Public speeching

Last week, I was invited to give a talk – along with two other speakers – to the University of Sheffield Natural History Society. The topic was “science in science fiction”. This wasn’t quite the same as my only previous public engagement, at the National Space Centre in February. This wasn’t a reading, it wasn’t about my books. So I had to write a new speech. And presentation slideshow. I stuck to a similar topic, however: real space and space travel and how science fiction has traditionally been getting it wrong.

Despite a couple of technical problems, the talk went well. First, Pieter Kok, Senior Lecturer in Theoretical Physics at the university, spoke about time travel and showed how to solve the grandfather paradox using quantum mechanics. Then it was my turn. And finally, David Kirby, Senior Lecturer in Science Communication Studies at the University of Manchester and author of Lab Coats in Hollywood, talked about the use of science consultants in Hollywood films. We then had a short Q&A session.

It was a fun evening. I don’t think my delivery was as polished as it could have been – I’m still not used to public speaking. And I did feel really old sitting in a venue full of students. A couple of them spoke to me afterwards – I think I may have upset them with my talk. I was a little dismayed that most of the sf novels they mentioned were all a good twenty or thirty years old, though one did name Ken MacLeod’s Learning The World. The society then laid on a barbecue, but because it was raining they just brought food into the venue – a burger, corn on the cob and coleslaw. I spoke to a couple of lecturers who were present, and then caught the tram home in time to watch the +1 edition of that night’s episode of In Plain Sight.

And here is the talk I gave (I’ve inserted the slides as jpegs):


My name is Ian Sales and I write science fiction. But you won’t find any of my books in the local Waterstone’s as I’ve yet to sell a novel to a publisher.

But I have written and published two parts of a quartet of novellas, called the Apollo Quartet: Adrift on the Sea of Rains and The Eye With Which The Universe Beholds Itself.

Adrift on the Sea of Rains won the BSFA Award in the short fiction category earlier this year.

I’ve had short stories published in a number of anthologies and magazines, and last year I also edited an anthology, Rocket Science, for Mutation Press.

Tonight, I’ll be talking about space and space travel in science fiction literature.

You probably all recognise this quotation – in fact, most of you, even the non-sf readers, have probably read the science fiction novel in which it appears. And yet, despite the vast, huge, mind-boggling bigness of space, Arthur Dent, Ford Prefect, Trillian and Zaphod Beeblebrox zip about the galaxy as if it were no bigger than the South Seas.

But space really is big.

Last month, Voyager 1 – the most distant human-made object from Earth, some 18 billion km away – left the Solar System. It’s not aimed at any particular star but it will pass within 15 trillion kilometres of Gliese 445, 17.6 light years away.

At its current speed of around 61,000 kph – about 17 kilometres per second – it’ll reach there in 40,000 years.

The fastest human-made objects ever built were the Helios-A and -B space probes, launched in 1974 and 1976 by West Germany and NASA. They reached a velocity of 252,792 kph. That’s London to New York in 79 seconds.

The fastest human beings ever were the crew of Apollo 10, who hit 39,897 kph during their return from the Moon. That’s London to New York in 8 minutes and 20 seconds.

Our nearest star is Proxima Centauri. It is 4.24 light years away, 4 years and 3 months at light-speed. But those Helios probes, the fastest objects ever built…

… they only reached 0.0234% of light speed. It would take them over 18,000 years to get there. If Voyager 1 were heading toward Proxima Centauri it would take it some 75,000 years.
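Those travel times are straightforward ratios of distance to speed – a quick sketch of the arithmetic:

```python
# Travel time to Proxima Centauri at a constant speed
LIGHT_SPEED_KPH = 299_792.458 * 3600  # speed of light, in km/h
PROXIMA_LY = 4.24                     # distance to Proxima Centauri, light years

def years_to_proxima(speed_kph):
    """Years needed to cover 4.24 light years at a constant speed in km/h."""
    return PROXIMA_LY * LIGHT_SPEED_KPH / speed_kph

print(f"{years_to_proxima(252_792):,.0f} years")  # Helios probes: ~18,000
print(f"{years_to_proxima(61_000):,.0f} years")   # Voyager 1: ~75,000
```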

So, you see, space is really really really big.

But you wouldn’t know it if you read science fiction. In novels by Iain M Banks, Peter F Hamilton, Lois McMaster Bujold or Elizabeth Moon, humans or aliens flit about the galaxy in starships, travelling from planet to planet in either hours, days or weeks.

But space in science fiction plays a metaphorical role. It is a signifier of distance. And distance itself is a measure of strangeness or exoticism.

In science fiction’s early days, Mars was a common locale for stories – not just Edgar Rice Burroughs’ A Princess of Mars in 1912, but also Robinsonades like Rex Gordon’s No Man Friday from 1956.

However, as scientists learned more about the Red Planet, so it became closer and less exotic. Locales in sf moved further afield. But by that point, the limits of the knowledge of the time had been reached, so imagination took over. The worlds were made-up, with no basis in reality. The universe itself became a fiction.

And that’s how science fiction continues to treat it.

Because it’s all about distance.

To Westerners of yore, the South Seas were exotic. And I mean that just as much in its demeaning colonialist definition as I do its less provocative meaning. Africa, South America – they were the same. Both were a long way away – weeks or months by sea travel. Science fiction authors just substituted weeks on the open sea with weeks in a spaceship.

Which is why spaceships in science fiction pretty much resemble ocean-going ships.

They have bridges, they have crew stations for everything from communications to navigation. They have cabins and wardrooms and storerooms. They have captains and first officers and chief engineers.

Real space travel isn’t like that at all.

Back in April 1961, just over fifty years ago, the human race sent someone into space for the first time. Yuri Gagarin orbited the earth in a metal ball 2.3 metres in diameter. That’s about as unlike an ocean-going ship as you can get.

This is the Skylon, a spaceplane being developed here in the UK by Reaction Engines Ltd. It can carry passengers, but it doesn’t have any crew. It’s completely automated.

The Boeing X-37B is robotic.

Even the Soyuz is chiefly controlled from the ground.

Real spacecraft are tiny.

On each of nine Apollo missions, three men travelled to the Moon in a command module with an interior volume of 5.9 cubic metres. That’s about the same as a Ford Transit van.

The Soyuz is even smaller – the re-entry module is only 2.5 cubic metres. It’s so small, in fact, that in order to fit in three seats, the centre seat has to be set back from the other two – so the person sitting in it, the commander, can’t even reach the control panel. They have to use a small stick to press the buttons.

There are other issues, as well. It’s all very well travelling to other stars and planets at physics-busting speeds, but it’s no good to you if you arrive there dead.


Given current technology, a fast transit journey to Mars would take about 150 days. It would be expensive, of course – vastly, hugely, mind-bogglingly expensive, in fact. But we don’t know yet how to keep those astronauts alive. We have yet to build a Closed Environment Life Support System capable of keeping human beings alive in space for any useful length of time.

Our only beachhead on the real universe, the International Space Station, requires around eight supply missions per year. And it’s only 400 km away.

But even before we take that first step, we have an obstacle to overcome. And it’s a biggie.

Our gravity well.

The best method we have to date for throwing things into orbit is a chemical rocket. And it’s horribly inefficient. You have to chuck away most of the rocket to get off the planet. It took 2.9 million kilos of Saturn V to send 45,000 kg to the Moon. That’s throwing away over 98% of the total mass.

Worse, rockets are limited by the very science which makes them possible.

This is the rocket equation.

The important variable here is ve, the effective exhaust velocity. (It’s “effective” because, for obvious reasons, it’s lower in atmosphere than in vacuum.) The problem with exhaust velocity is that it’s determined by the propellants used in the rocket, and there’s only so much energy that can be generated from a chemical reaction involving two specific propellants. You can’t magically make dinitrogen tetroxide and a 50/50 mixture of hydrazine and unsymmetrical dimethylhydrazine generate more energy than they do. Chemistry doesn’t work like that. Those, incidentally, were the propellants used by the Apollo spacecraft’s service module engine – the Saturn V itself burned kerosene and liquid oxygen in its first stage, and liquid hydrogen and liquid oxygen in its upper stages.
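A minimal sketch of what the equation implies, in its classical Tsiolkovsky form – the delta-v and exhaust-velocity figures below are illustrative round numbers, not anything from the talk:

```python
from math import exp

def mass_ratio(delta_v, ve):
    """Tsiolkovsky rocket equation, solved for the mass ratio:
    delta_v = ve * ln(m0 / m1)  =>  m0 / m1 = exp(delta_v / ve)"""
    return exp(delta_v / ve)

# Reaching low Earth orbit takes roughly 9,400 m/s of delta-v once gravity
# and drag losses are counted; a good kerosene/LOX engine manages an
# effective exhaust velocity of about 3,000 m/s.
ratio = mass_ratio(9_400, 3_000)
print(f"m0/m1 = {ratio:.1f}")  # about 23: over 95% of launch mass is propellant
```

No clever engineering escapes that exponential: double the delta-v required and the mass ratio squares.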

Because getting into orbit is so inefficient, it’s correspondingly expensive, between five and ten thousand dollars per kilo. Which means you need to make the most of what you can throw up there. Spacecraft are tiny because every kilo counts. You don’t want to waste valuable weight on cabins and wardrooms.

Of course, if you had some magical means of propulsion that could power your spaceship to escape velocity without all that chemical inferno, then it would be a different matter. But we don’t, and science fiction has a bad tendency to gloss over that lack. Authors wave their hands and invoke the phrase “anti-gravity”, but really it’s not at all scientific.

The same is true of interstellar travel.

Science fiction likes its hyperspace drives and warp drives and FTL drives and such, but they’re about as scientific as an Infinite Improbability Drive. Even theoretical ones like the Alcubierre Drive would require more energy to operate than actually exists in the universe, so that’s not going to happen any time soon.

Which begs the question – how important is the science in science fiction?

There are science fiction novels which contain bona fide science, or have premises based on real science:

… like Greg Benford’s Timescape or Poul Anderson’s Tau Zero or Gwyneth Jones’ Life or anything by Greg Egan. But they’re more the exception than the rule.

You can’t even say that once upon a time sf stories were all about the science, even though the inventor of the genre, Hugo Gernsback, described science fiction in 1926 as:



Science fiction was born in the white-hot enthusiasm for technological progress implicit in the electronics magazines of the 1920s. But few of its purveyors were trained scientists, and when the genre was repositioned at the end of that decade as yet another form of pulp adventure fiction, whatever scientific credibility it had demanded subsequently lapsed. Since then, it could be said science fiction has been little more than a mechanism for delivering bad ideas to impressionable members of society.

In other words, science fiction is, and always has been, scientifically bankrupt.

Happily, the genre’s name comprises two words, and if the genre has long since lost the intellectual rigour demanded by one of those words, it has at least always been driven by the second. Science fiction is fiction, it is…

stories. And it is in its approach to those stories that it comes closer to science than any other mode of fiction. It posits a rationalist scientific worldview. It might fumble the details, or just make them up out of whole cloth, but it recognises that the real universe is a place where…

… physics and chemistry and biology and such all hold sway. It may use magical science and technology, but it’s still science and technology, it is still assumed to work like science and technology. It doesn’t work because. It doesn’t require divine powers or chicken entrails or a magic hat.

Despite the fact science fiction gets it wrong so frequently and so consistently, I still prefer to call it that and not “speculative fiction”. All modes of fiction are essentially speculative. Telling stories is a way of speculating about something. By unpacking the abbreviation “sf” as science fiction, it tells us it’s a mode of fiction which views the world with a scientific eye – even if its actual scientific record is pretty damn poor…

As I’ve outlined, we have a fifty-year tradition of real space travel, but science fiction insists on using its ocean-going ships in space.

We know the universe is even more vast, huge and mind-bogglingly big than Douglas Adams could even imagine, but science fiction still pretends interstellar distances are crossable within a human lifetime.

Here’s an example of that mind-boggling bigness:

… the Sculptor Wall is a superstructure of galaxies. It’s 370 million light years long, 230 million light years wide and 45 million light years deep. That’s millions of light years.

In kilometres, that’s 3,500 with 18 zeroes after it. And the Sculptor Wall’s not even the largest superstructure we’ve found.
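That conversion is a one-liner, taking one light year as about 9.46 trillion kilometres:

```python
KM_PER_LIGHT_YEAR = 9.4607e12  # kilometres in one light year

# The Sculptor Wall's long axis: 370 million light years
length_km = 370e6 * KM_PER_LIGHT_YEAR
print(f"{length_km:.1e} km")  # 3.5e+21 km, ie 3,500 with 18 zeroes after it
```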

The more science tells us about the universe, the less significant we discover we are. By manipulating our sense of scale, science fiction puts us back where we want to be – at the centre. Important. Sf humanises a universe which is completely indifferent to us.

And, in order to do that, science fiction writers all too often fall back on metaphors that they, and their readers, find comfortable. The chemist’s down the road.

Spaceships with a captain sitting in a big important chair on a bridge. The real world isn’t like that, real space travel isn’t like that, real interstellar distances aren’t like that.

As Korzybski might have said, “the metaphor is not the thing itself”. But use that metaphor too much and too often, and it might as well be – even if it has become completely decoupled from the thing it metaphorises.

Of course, it may well be that we’ll hit a Kuhnian paradigm shift sometime in the future and render everything I’ve said so far completely irrelevant. It may well be that all those science fiction novels of galactic adventure really are maps of the future.

But I’m not holding my breath.



Metaphorising the metaphors

To some people, science fiction is a toy-box packed with neat gadgets and shiny gewgaws, which they pull out and deploy in service to their story. They need, for example, a locale in which certain events happen a certain way, so they invent an alien world. That alien world needs to be distant, so some form of travel to reach it is required. And since distance in most people’s minds equates to time taken to reach the destination, some type of long-journey travel is required. To early writers of science fiction, there was only one model they could use: sea travel. And that worked pretty well because distant lands were exotic, and the distance – ie, journey time – itself was a signifier of exoticism.

Initially, Mars was pretty distant, but as we learned more about the Red Planet, so it became closer and less “colourful”. Locales in sf thus moved further afield. But by that point, the limits of the knowledge of the time had been reached, so imagination took over. The worlds were made-up, with no basis in reality. The universe itself became a fiction.

We now know a great deal more about the universe than we did in the 1920s and 1930s. We know that it is unimaginably vast, that the distances between stars preclude any meaningful relationship in human terms. The universe is no longer a fit place on which to map distant shores and strange new lands.

We also have over fifty years of actual space travel, and we know how difficult it is to keep alive in space the fragile human organism and to travel useful distances in useful times. We also know there is an enormously expensive barrier between our world and the rest of the universe: our gravity well.

The spaceship-as-ocean-liner trope belongs to the fictional universe, not the real one. But the metaphor for the journey to far-off places has become so embedded in genre that it’s used as if it were no more than setting – as if it were a signifier of the genre itself. And while sf writers over the decades have rung a variety of changes over the spaceship trope – inventing new and more imaginative ways to explain how it circumvents the real universe, how it can traverse those distances beyond imagination in an eyeblink – the spaceship still operates very much as it did back in sf’s earliest days.

Except now, the spaceship trope is not enough. Now it has to be disguised, by referring to it metaphorically.

I work in computing, so the illustration of this which works best for me is that of the operating system. An OS is, according to Operating Systems Design and Implementation, by Andrew S Tanenbaum and Albert S Woodhull, a fundamental system program “which controls the computer’s resources and provides the base upon which the application programs can be written”. In the beginning, as Neal Stephenson once said, was the command line. Using it, computer operators could call on programs which would perform specific tasks. They understood that listing files from an area of the filesystem entailed reading data embedded in magnetic media and then rendering that data in a human-readable format. But when computers moved onto the desks of business people and then into the home, that knowledge was unnecessary. Worse, it was potentially confusing. So someone invented the idea of a metaphor to represent the data on the magnetic media and the programs which performed operations on the data: the Graphical User Interface. (Invented by Alan Kay at Xerox PARC in 1973.) A GUI such as Windows or OS X or X11 is a metaphor which allows users to easily and simply perform complex operations on a computer using its built-in resources.

An interesting aside: several people have researched, and even built, orthogonally persistent operating systems. These are ones which run entirely in memory, and the complete memory-state is flashed to persistent storage (disk, flash card, etc) at regular and frequent intervals. Should the computer crash, the last memory-state image can be loaded back into memory, and the user returns to exactly where they were before the crash. The interesting thing about an orthogonally persistent operating system is that it needs a new metaphor. The existing one has become uncoupled from the underlying reality. The orthogonally persistent OS does not keep files in folders on a disk because it doesn’t need to put data away somewhere safe while it’s not in use. It doesn’t need to organise the stored data so it can be navigated. Everything is in use all the time. So it has a workspace, and everything is accessible within it all the time.
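As a toy illustration of the idea – a hypothetical sketch, not how a real orthogonally persistent OS is implemented – the complete workspace state can be flashed to storage and later restored wholesale:

```python
import pickle

class Workspace:
    """Toy model of an orthogonally persistent workspace: everything lives
    in memory, and the whole memory-state is flashed to storage at intervals."""

    def __init__(self):
        self.state = {}

    def snapshot(self, path):
        # Flash the complete memory-state to persistent storage.
        with open(path, "wb") as f:
            pickle.dump(self.state, f)

    @classmethod
    def restore(cls, path):
        # After a crash, reload the last memory-state image wholesale.
        ws = cls()
        with open(path, "rb") as f:
            ws.state = pickle.load(f)
        return ws

ws = Workspace()
ws.state["draft"] = "chapter one..."
ws.snapshot("workspace.img")  # in a real system this would happen automatically

recovered = Workspace.restore("workspace.img")
print(recovered.state["draft"])  # the user is back exactly where they were
```

Note there are no files or folders in the user’s view of this: the snapshot is an implementation detail, which is exactly why the desktop metaphor stops fitting.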

This concept of the operating system metaphor is one of the chief problems I had with cyberpunk as a subgenre – aside from its uncritical use, and tacit approval, of neoliberalism, of course. It took the metaphor that was the GUI and then layered another metaphor, cyberspace, on top of it. Cyberpunk writers wrote about the metaphor as if it were the thing itself.

And that’s what I see some twenty-first century sf writers doing. They’ve taken sf’s tropes, and are not only using them as if they were the thing itself but are adding a layer of metaphor on top. So when you dig deep into the story, you don’t find reality, you find a metaphor which has become uncoupled from its underlying reality. This is how I interpreted Paul Kincaid’s reference to “exhaustion”.

Personally, I think understanding how something works is key to learning how to do it better. It’s important to my development as a writer, I feel, to know what science fiction does, how it does it, and in what ways I can bend or break or subvert it to best effect. The uncritical use of tropes, and subsequent disguising of them, doesn’t appeal to me as a technique for writing sf. It pushes all the emphasis to the presentation layer, to the prose. Yes, good prose is important, I appreciate good writing. And I like to think my prose is good. But choosing pretty words is not enough for me.

I would sooner explore science fiction itself. I think as a genre we’ve stopped doing that. We’re either playing postmodernist shellgames, or metaphorising the metaphors, or deep-mining the genre for tropes as if those tropes were its sole raison d’être. Some might say these are indicators of decadence. Perhaps they are. But I don’t think it means science fiction is dead or dying, just that it needs a good kick up the bum…

