Two researchers at McGill University, Montréal, Québec, Canada, have uncovered evidence of an ecosystem teeming with giant marine predators some 130 million years ago. The largest of these predators could, quite literally, have eaten something the size of a modern orca as little more than a snack. This will make depressing reading for creationists, not only because it all happened deep in the long pre-“Creation Week” history of life on Earth, but because the evolutionary arms races that led to these giants are precisely what the theory of evolution by natural selection predicts.
It doesn’t get any easier for creationists. Just because it’s Christmas week doesn’t mean the awkward facts are going to go away, or that scientists are going to stop uncovering more of them. No matter what they post on social media; no matter how loudly they shout; or how fervently they gather on Sundays to collectively drown out their doubts, Santa is not going to deliver evidence that the Bronze Age creation myths in the Bible contain even a grain of historical truth. The problem is that truth remains true whether a creationist believes it or not, and regardless of whether their parents believed it. No amount of looking the other way or pretending the facts aren’t there will ever change that.
The palaeontologists reached their conclusions by reconstructing an ecosystem network for all known animal fossils from the Paja Formation in central Colombia. They used body sizes, feeding adaptations, and comparisons with modern animals, then validated the results against one of the most detailed present-day marine ecosystem networks available: the living Caribbean ecosystem. The Paja ecosystem thrived with plesiosaurs, ichthyosaurs, and abundant invertebrates, giving rise to one of the most intricate marine food webs known. This complexity emerged as sea levels rose and Earth’s climate warmed during the Cretaceous period of the Mesozoic era, triggering an explosion of marine biodiversity.
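The general approach can be illustrated with a toy example. The sketch below is not the authors’ actual method: the taxa, body masses, and the simple size-ratio feeding rule are all hypothetical, and real reconstructions also draw on feeding anatomy and comparisons with living relatives. It simply shows how trophic links can be proposed from body-size data and how a summary statistic such as connectance (links divided by species squared) can then be checked against a well-characterised modern food web like the Caribbean reference network.

```python
# Toy illustration of inferring a food web from body sizes (hypothetical data).
# Rule of thumb (an assumption for this sketch): a predator can eat prey between
# roughly 1% and 50% of its own body mass.

TAXA_MASS_KG = {              # hypothetical Paja-like taxa and body masses
    "giant pliosaur": 10000,
    "ichthyosaur": 1000,
    "small plesiosaur": 300,
    "large fish": 20,
    "ammonite": 2,
}

def infer_links(taxa, lower=0.01, upper=0.5):
    """Propose predator -> prey links where prey mass falls in the predator's feeding window."""
    links = set()
    for pred, pred_mass in taxa.items():
        for prey, prey_mass in taxa.items():
            if pred != prey and lower * pred_mass <= prey_mass <= upper * pred_mass:
                links.add((pred, prey))
    return links

links = infer_links(TAXA_MASS_KG)
species = len(TAXA_MASS_KG)
connectance = len(links) / species**2    # fraction of all possible links that are realised

print(f"{len(links)} inferred links, connectance = {connectance:.2f}")
# A reconstructed connectance in the same range as a modern reference web
# is one simple sanity check on a fossil food-web reconstruction.
```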
Geometric and mathematical patterns on Halafian pottery.
Scientists have once again — almost certainly unintentionally — produced evidence that the Bible is profoundly wrong about human history. This time it comes in the form of pottery shards dating back more than 8,000 years to the Halafian culture of northern Mesopotamia (c. 6200–5500 BCE). These artefacts show that people were not only producing sophisticated ceramics, but were decorating them with complex mathematical patterns long before the formal invention of numbers and counting systems.
According to the biblical account of global history, Earth was subjected to a catastrophic genocidal reset, inflicted in a fit of pique by a vengeful god who had failed to anticipate how his creation would turn out. Rather than simply eliminating humanity and starting again with a corrected design, this deity allegedly chose to preserve the same flawed model in a wooden boat while drowning everything else beneath a flood so deep it covered the highest mountains. The implicit hope appears to have been that repeating the experiment would somehow yield a different result.
As implausible as that story already is, we now possess a vast body of archaeological and palaeontological evidence showing not only that Earth is vastly older than the biblical narrative allows, but that this supposed catastrophic reset never occurred. The latter is demonstrated by the existence of civilisations that predate the alleged flood and continue uninterrupted through it, as though it never happened at all. Their material remains include artefacts that would have been completely destroyed or displaced by such a deluge, and settlement sites that show no sign of burial beneath a chaotic, fossil-bearing sedimentary layer containing mixed local and foreign species.
No such global layer exists. Instead, human artefacts are found precisely where they were made and used, unaffected by any mythical torrent scouring the planet clean.
The designs on the Halafian pottery themselves are particularly revealing. They include repeating patterns — for example, binary progressions such as 2, 4, 8, 16, 32 — suggesting that this culture possessed systematic ways of dividing land or goods to ensure equitable distribution.
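To see why a repeated-doubling motif hints at systematic division, consider repeated halving of a shared quantity. The snippet below is only an illustrative sketch with made-up numbers, not evidence from the pottery itself: halving a plot or a store of grain again and again yields 2, 4, 8, 16, 32 equal shares, the same progression seen in the decorative patterns.

```python
# Repeated halving of a shared quantity produces the binary progression 2, 4, 8, 16, 32.
# The plot size below is hypothetical; the point is only the arithmetic of equal division.

plot_area = 64.0   # e.g. 64 units of land (made-up figure)

shares = 1
for round_number in range(1, 6):
    shares *= 2                          # each round of halving doubles the number of shares
    share_size = plot_area / shares
    print(f"round {round_number}: {shares} equal shares of {share_size} units each")
# Output counts 2, 4, 8, 16, 32 shares: the same sequence found on the pottery.
```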
Photo montage of five major elements of DAN5 fossil cranium
Credit: Dr. Michael Rogers
Map showing potential migration routes of the human ancestor, Homo erectus, in Africa, Europe and Asia during the early Pleistocene. Key fossils of Homo erectus and the earlier Homo habilis species are shown, including the new face reconstruction of the DAN5 fossil from Gona, Ethiopia dated to 1.5 million years ago.
Credit: Dr. Karen L. Baab. Scans provided by National Museum of Ethiopia, National Museums of Kenya and Georgian National Museum.
Palaeontologists at the College of Graduate Studies, Glendale Campus of Midwestern University in Arizona, have reconstructed the head and face of an early Homo erectus specimen, DAN5, from Gona in the Afar region of Ethiopia on the Horn of Africa. In doing so, they have uncovered several unexpected features that should trouble any creationist who understands their significance. The research has just been published open access in Nature Communications.
Creationism requires its adherents to imagine that there are no intermediate fossils showing a transition from the common Homo/Pan ancestor to modern Homo sapiens, whom they claim were created as a single couple just a few thousand years ago with a flawless genome designed by an omniscient, omnipotent creator. The descendants of such a couple would, of course, show no genetic variation, because both the perfect genome and its replication machinery would operate flawlessly. No gene variants could ever arise.
The reality, however, is very different. Not only are there vast numbers of fossils documenting a continuum from the common Homo/Pan ancestor of around six million years ago, but there is also so much variation among them that it has become increasingly difficult to force them into a simple, linear sequence. Instead, human evolution is beginning to resemble a tangled bush rather than a neat progression.
The newly reconstructed face of the Ethiopian Homo erectus is no exception. It displays a mosaic of more primitive facial traits alongside features characteristic of the H. erectus populations believed to have spread out of Africa in the first of several waves of hominin migration into Eurasia. The most plausible explanation is that the Ethiopian population descended from an earlier expansion within Africa, became isolated in the Afar region, and retained its primitive characteristics while other populations continued to evolve towards the more derived Eurasian form.
The broader picture that has emerged in recent years—particularly since it became clear that H. sapiens, Neanderthals, and Denisovans formed an interbreeding complex that contributed to modern non-African humans—is one of repeated expansion into new environments, evolution in isolation, and subsequent genetic remixing as populations came back into contact. DAN5 represents just one of these populations, which appears to have evolved in isolation for some 300,000 years.
Not only is this timescale utterly incompatible with the idea of the special creation of H. sapiens 6,000–10,000 years ago, but the sheer existence of this degree of variation is also irreconcilable with the notion of a flawless, designed human genome. Even allowing for old-earth creationist claims that a biblical “day” may represent an elastic number of millions of years, the problem remains: a highly variable genome must still be explained as the product of perfect design. A flawless genome created by an omniscient, omnipotent creator should, moreover, have been robust enough to withstand interference following “the Fall” — an event such a creator would necessarily have foreseen, particularly if it also created the conditions for that fall and the other creative agency involved (Isaiah 45:7).
As usual, creationists seem to prefer the conclusion that their supposed intelligent creator was incompetent—either unaware of the future, indifferent to it, or powerless to prevent it—rather than accept the far more parsimonious explanation: that modern Homo sapiens are the product of a long, complex evolutionary history from more primitive beginnings, in which no divine intervention is required.
Origins of Homo erectus
Homo erectus
Homo erectus appears in the fossil record around 1.9–2.0 million years ago, emerging from earlier African Homo populations, most likely derived from Homo habilis–like ancestors. Many researchers distinguish early African forms as Homo ergaster, reserving H. erectus sensu stricto for later Asian populations, although this is a taxonomic preference rather than a settled fact.
Key features of early H. erectus include:
A substantial increase in brain size (typically 600–900 cm³ initially, later exceeding 1,000 cm³)
A long, low cranial vault with pronounced brow ridges
A modern human–like body plan, with long legs and shorter arms
Clear association with Acheulean stone tools and likely habitual fire use (by ~1 million years ago)
Crucially, H. erectus was the first hominin to disperse widely beyond Africa, reaching:
The Caucasus (Dmanisi) by ~1.8 Ma
Southeast Asia (Java) by ~1.6 Ma
China (Zhoukoudian) by ~0.8–0.7 Ma
This makes H. erectus not a single, static species, but a long-lived, geographically structured lineage.
Homo erectus as a population complex
Rather than a uniform species, H. erectus is best understood as a metapopulation:
African populations
Western Eurasian populations
East and Southeast Asian populations
These groups experienced repeated range expansions, isolation, local adaptation, and partial gene flow, producing the mosaic anatomy seen in fossils such as DAN5.
This population structure is critical for understanding later human evolution.
Relationship to later Homo species
Neanderthal (H. neanderthalensis)
From H. erectus to H. heidelbergensis
By around 700–600 thousand years ago, some H. erectus-derived populations—probably in Africa—had evolved into forms often grouped as Homo heidelbergensis (or H. rhodesiensis for African material).
These hominins had:
Larger brains (1,100–1,300 cm³)
Reduced facial prognathism
Continued Acheulean and early Middle Stone Age technologies
They represent a transitional grade, not a sharp speciation event.
Divergence of Neanderthals, Denisovans, and modern humans
Genetic and fossil evidence indicates the following broad pattern:
~550–600 ka: A heidelbergensis-like population splits
African branch → modern Homo sapiens
Eurasian branch → Neanderthals and Denisovans
Neanderthals
Evolved primarily in western Eurasia
Adapted to cold climates
Distinctive cranial morphology
Contributed ~1–2% of DNA to all non-African modern humans
Denisovans
Known mostly from genetic data, with sparse fossils (Denisova Cave)
Closely related to Neanderthals but genetically distinct
Contributed genes to Melanesians, Aboriginal Australians, and parts of East and Southeast Asia, including variants affecting altitude adaptation (e.g. EPAS1)
Modern Homo sapiens
Emerged in Africa by ~300 ka
Retained genetic continuity with earlier African populations
Dispersed out of Africa multiple times, beginning ~70–60 ka
Interbred repeatedly with Neanderthals and Denisovans
The key point: no clean branching tree
Human evolution is reticulate, not linear:
Species boundaries were porous
Gene flow occurred repeatedly
Populations diverged, adapted, re-merged, and diverged again
Homo erectus is not a side branch that “went extinct”, but a foundational grade from which multiple later lineages emerged. DAN5 fits neatly into this framework: a locally isolated erectus population retaining ancestral traits while others continued evolving elsewhere.
Why this matters
This picture:
Explains mosaic anatomy in fossils
Accounts for genetic admixture in living humans
Makes sense of long timescales and geographic diversity
Is incompatible with any model of recent, perfect, single-pair creation
Instead, it shows that our species is the outcome of millions of years of population dynamics, not a single moment of design.
A new fossil face sheds light on early migrations of ancient human ancestor
A 1.5-million-year-old fossil from Gona, Ethiopia reveals new details about the first hominin species to disperse from Africa.
Summary: Virtual reassembly of teeth and fossil bone fragments reveals a beautifully preserved face of a 1.5-million-year-old human ancestor—the first complete Early Pleistocene hominin cranium from the Horn of Africa. This fossil, from Gona, Ethiopia, hints at a surprisingly archaic face in the earliest human ancestors to migrate out of Africa.
A team of international scientists, led by Dr. Karen Baab, a paleoanthropologist at the College of Graduate Studies, Glendale Campus of Midwestern University in Arizona, produced a virtual reconstruction of the face of early Homo erectus. The 1.5 to 1.6 million-year-old fossil, called DAN5, was found at the site of Gona, in the Afar region of Ethiopia. This surprisingly archaic face yields new insights into the first species to spread across Africa and Eurasia. The team’s findings are being published in Nature Communications.
We already knew that the DAN5 fossil had a small brain, but this new reconstruction shows that the face is also more primitive than classic African Homo erectus of the same antiquity. One explanation is that the Gona population retained the anatomy of the population that originally migrated out of Africa approximately 300,000 years earlier.
Dr. Karen L. Baab, lead author
Department of Anatomy
Midwestern University
Glendale, AZ, USA.
Gona, Ethiopia
The Gona Paleoanthropological Research Project in the Afar of Ethiopia is co-directed by Dr. Sileshi Semaw (Centro Nacional de Investigación sobre la Evolución Humana, Spain) and Dr. Michael Rogers (Southern Connecticut State University). Gona has yielded hominin fossils more than 6.3 million years old, and stone tools spanning the last 2.6 million years of human evolution. The newly presented hominin reconstruction includes a fossil brain case (previously described in 2020) and smaller fragments of the face belonging to a single individual called DAN5 dated to between 1.6 and 1.5 million years ago. The face fragments (and teeth) have now been reassembled using virtual techniques to generate the most complete skull of a fossil human from the Horn of Africa in this time period. The DAN5 fossil is assigned to Homo erectus, a long-lived species found throughout Africa, Asia, and Europe after approximately 1.8 million years ago.
How did the scientists reconstruct the DAN5 fossil?
The researchers used high-resolution micro-CT scans of the four major fragments of the face, which were recovered during the 2000 fieldwork at Gona. 3D models of the fragments were generated from the CT scans. The face fragments were then re-pieced together on a computer screen, and the teeth were fit into the upper jaw where possible. The final step was “attaching” the face to the braincase to produce a mostly complete cranium. This reconstruction took about a year and went through several iterations before arriving at the final version.
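Digitally refitting fragments of this kind ultimately comes down to repeated rigid alignment of 3D surface models. The sketch below is not the team’s actual workflow or software; it uses invented landmark coordinates purely to illustrate the standard Kabsch (rigid Procrustes) step that finds the rotation and translation best fitting one fragment’s matching points onto another’s.

```python
import numpy as np

def rigid_align(source, target):
    """Best-fit rotation R and translation t mapping source points onto target (Kabsch algorithm)."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)                     # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical matching landmarks on two fragments (e.g. points along a shared broken edge).
fragment_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
theta = np.radians(30)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
fragment_b = fragment_a @ true_R.T + np.array([5.0, -2.0, 0.5])   # same edge seen in the other frame

R, t = rigid_align(fragment_a, fragment_b)
aligned = fragment_a @ R.T + t
print("max alignment error:", np.abs(aligned - fragment_b).max())  # ~0 for exact correspondences
```

In practice such alignments are iterated fragment by fragment, with anatomical knowledge constraining which surfaces can plausibly meet, which is why a reconstruction like DAN5 can take many iterations over a year.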
Dr. Baab, who was responsible for the reconstruction, described this as “a very complicated 3D puzzle, and one where you do not know the exact outcome in advance. Fortunately, we do know how faces fit together in general, so we were not starting from scratch.”
What did scientists conclude?
This new study shows that the Gona population 1.5 million years ago had a mix of typical Homo erectus characters concentrated in its braincase, but more ancestral features of the face and teeth normally only seen in earlier species. For example, the bridge of the nose is quite flat, and the molars are large. Scientists determined this by comparing the size and shape of the DAN5 face and teeth with other fossils of the same geological age, as well as older and younger ones. A similar combination of traits was documented previously in Eurasia, but this is the first fossil to show this combination of traits inside Africa, challenging the idea that Homo erectus evolved outside of the continent.
“I'll never forget the shock I felt when Dr. Baab first showed me the reconstructed face and jaw. The oldest fossils belonging to Homo erectus are from Africa, and the new fossil reconstruction shows that transitional fossils also existed there, so it makes sense that this species emerged on the African continent,” says Dr. Kaifu. “But the DAN5 fossil postdates the initial exit from Africa, so other interpretations are possible.”
Dr. Yousuke Kaifu, co-author
The University Museum
The University of Tokyo
Bunkyo-ku, Tokyo, Japan.
This newly reconstructed cranium further emphasizes the anatomical diversity seen in early members of our genus, which is only likely to increase with future discoveries.
Dr. Michael J. Rogers, co-author.
Department of Anthropology
Southern Connecticut State University
New Haven, CT, USA.
It is remarkable that the DAN5 Homo erectus was making both simple Oldowan stone tools and early Acheulian handaxes, among the earliest evidence for the two stone tool traditions to be found directly associated with a hominin fossil.
Dr. Sileshi Semaw, co-author
Centro Nacional de Investigación sobre la Evolución Humana (CENIEH)
Burgos, Spain.
Future Research
The researchers are hoping to compare this fossil to the earliest human fossils from Europe, including fossils assigned to Homo erectus but also a distinct species, Homo antecessor, both dated to approximately one million years ago.
Comparing DAN5 to these fossils will not only deepen our understanding of facial variability within Homo erectus but also shed light on how the species adapted and evolved.
Dr. Sarah E. Freidline, co-author
Department of Anthropology
University of Central Florida
Orlando, FL, USA.
There is also potential to test alternative evolutionary scenarios, such as genetic admixture between two species, as seen in later human evolution among Neanderthals, modern humans and “Denisovans.” For example, maybe DAN5 represents the result of admixture between classic African Homo erectus and the earlier Homo habilis species.
We’re going to need several more fossils dated between one to two million years ago to sort this out.
Abstract
The African Early Pleistocene is a time of evolutionary change and techno-behavioral innovation in human prehistory that sees the advent of our own genus, Homo, from earlier australopithecine ancestors by 2.8-2.3 million years ago. This was followed by the origin and dispersal of Homo erectus sensu lato across Africa and Eurasia between ~ 2.0 and 1.1 Ma and the emergence of both large-brained (e.g., Bodo, Kabwe) and small-brained (e.g., H. naledi) lineages in the Middle Pleistocene of Africa. Here we present a newly reconstructed face of the DAN5/P1 cranium from Gona, Ethiopia (1.6-1.5 Ma) that, in conjunction with the cranial vault, is a mostly complete Early Pleistocene Homo cranium from the Horn of Africa. Morphometric analyses demonstrate a combination of H. erectus-like cranial traits and basal Homo-like facial and dental features combined with a small brain size in DAN5/P1. The presence of such a morphological mosaic contemporaneous with or postdating the emergence of the indisputable H. erectus craniodental complex around 1.6 Ma implies an intricate evolutionary transition from early Homo to H. erectus. This finding also supports a long persistence of small-brained, plesiomorphic Homo group(s) alongside other Homo groups that experienced continued encephalization through the Early to Middle Pleistocene of Africa.
Introduction
The oldest fossils assigned to our genus are ~2.8 million years old (Myr) from Ethiopia and signal a long history of Homo evolution in the Rift Valley1,2,3. There is evidence of multiple Homo lineages in Africa by 2.0–1.9 million years ago (Ma) and an archaeological and paleontological record of expansion to more temperate habitats in the Caucasus and Asia between 2.0 and 1.8 Ma4 (Fig. 1). The last appearance datum for the more archaic Homo habilis species (or “1813 group”) is ~1.67 (OH 13) or ~1.44 Ma, if KNM-ER 42703 is correctly attributed to H. habilis5, which is uncertain6. The archetypal early African Homo erectus fossils from Kenya (i.e., KNM-ER 3733, 3883; and the adolescent KNM-WT 15000) already present a suite of traits that distinguish them from early Homo taxa by 1.6–1.5 Ma, including larger brains and bodies, smaller postcanine dentition, more pronounced cranial superstructures (e.g., projecting and tall brow ridges), a relatively wide midface and nasal aperture, deep palate, and projecting nasal bridge1,6,7,8,9,10,11. The only evidence for H. erectus sensu lato in Africa before 1.8 Ma are fragmentary or juvenile fossils12,13,14, while fossils expressing both ancestral H. habilis and more derived H. erectus s.l. morphological traits are only known from Dmanisi, Georgia at 1.77 Ma15,16. Thus, H. erectus emerged from basal Homo between 2.0 and 1.6 million years ago, but when, where (Africa or Eurasia), and how it occurred remain unclear. An expanded fossil record also documents significant variation in endocranial17,18, craniofacial6,8, and dentognathic morphology19,20 throughout the Early Pleistocene, which extends to the Middle Pleistocene with the addition of small-brained Homo lineages to the human tree.
Fig. 1: Early Homo and Homo erectus timeline between 2.0 and 1.0 Ma and map of key sites in Africa and southern Eurasia.
The solid bars of the timeline indicate well-established first and last appearance data; the horizontal stripes indicate possible extensions of the time range based on fragmentary or juvenile fossils. Diagonal lines signal earlier archaeological presence in those regions. The question mark indicates a possible date of <1.49 Ma for the Mojokerto, Indonesia site cf.22,23,24,25. The horizontal gray bar represents the time range associated with DAN5/P1. Colors on the map indicate presence of fossils matching taxa or geographic groups of H. erectus as indicated in the timeline. Surface renderings of the best-preserved regional representatives of archaic or small-brained Homo fossils (beginning at top and continuing clockwise): D2700, KNM-ER 1813, KNM-ER 1470, KNM-ER 3733, SK 847, OH 24, KNM-WT 15000, and DAN5/P1. All surface renderings visualized at FOV 0° (parallel). Map was generated in “rnaturalearth” package68 for R.
The initial announcement of DAN5/P1 assigned it to H. erectus on the basis of derived neurocranial traits21. Subsequent analyses of neurocranial shape and endocranial morphology confirmed affinity with H. erectus but also noted similarities to early (pre-erectus) Homo fossils such as KNM-ER 181317,18. Only limited information about the partial maxilla and dentition was presented in the original description21. Yet, facial and dental traits are increasingly important in early Homo systematics, given overlap in brain size among closely related hominins6,8,22. The DAN5/P1 fossil is a rare opportunity to evaluate neurocranial, facial, and dental anatomy in a single Early Pleistocene Homo fossil and thus has significant implications for this discussion.
Here we present a new cranial reconstruction of the 1.6–1.5 Myr DAN5/P1 fossil from Gona, Ethiopia. This study demonstrates that the small-brained adult DAN5/P1 fossil (598 cm³)21 presents a previously undocumented combination of early Homo and H. erectus features in an African fossil.
Taken together, the evidence leaves little room for the idea that Homo erectus was a dead-end curiosity, neatly replaced by something entirely new. Instead, it represents a long-lived, widely dispersed, and internally diverse population complex that provided the evolutionary substrate from which later human lineages emerged. Its descendants were not produced by sudden leaps or special creation events, but by the ordinary, observable processes of population divergence, isolation, and adaptation acting over deep time.
Modern Homo sapiens, Neanderthals, and Denisovans did not arise as separate “kinds”, nor did they follow clean, branching paths. They represent regional outcomes of this erectus-derived heritage, shaped by geography, climate, and repeated episodes of contact and interbreeding. The genetic legacy of those interactions is still present in living humans today, providing independent confirmation of what the fossil record has long been indicating.
What emerges is not a ladder of progress but a dynamic, reticulated history: populations spreading, fragmenting, evolving in isolation, and reconnecting again. Fossils such as DAN5 are not anomalies to be explained away; they are exactly what we should expect from evolution operating on structured populations across continents and hundreds of thousands of years.
For creationism, this is deeply inconvenient. For evolutionary biology, it is precisely the kind of rich, internally consistent picture that arises when multiple independent lines of evidence converge on the same conclusion: humanity is the product of a long, complex evolutionary history, not a recent act of design.
In another major embarrassment for those creationists who understand it, researchers at the Gladstone Institutes and Stanford University have developed a method for linking the genome of a cell to diseases caused by specific gene variants. They have recently published their findings, open access, in Nature.
Creationists insist that the human genome was intelligently designed, with every outcome the result of “complex specified information” which, according to Discovery Institute Fellow William A. Dembski, constitutes definitive evidence of intelligent design. If this were true, it would follow that genes which cause disease were intelligently designed to cause those diseases.
The difficulty deepens for creationists when one considers that many diseases involve multiple genes, sometimes hundreds or even thousands, all of which must possess the “correct” variants for the disease to emerge. In other words, some diseases not only depend on Dembski’s “complex specified genetic information”, but also conform to Michael J. Behe’s proposed hallmark of intelligent design: irreducible complexity.
Unless creationists invoke an additional creator—one over whom their reputedly omnipotent and omniscient god has no control—their supposedly intelligent designer must have deliberately created these gene variants to produce the suffering they cause.
By contrast, the evolutionary explanation requires no such mental gymnastics. The existence of genetic variants is exactly what evolutionary theory predicts, and provided such variants remain rare within a population, there is little selective pressure to remove them. A genome produced by an omniscient, perfect designer, however, would contain no such variants: the original design would be flawless, as would the mechanisms responsible for replicating it. The very existence of gene variants is therefore evidence against intelligent design.
The technique developed by the research team is sensitive enough to examine the entire genome and determine which genes influence which cell types. This makes it possible to identify which genes contribute to particular diseases. In cases where a single gene is involved, this can be relatively straightforward, but where many genes are implicated, it can be extremely difficult to disentangle their individual effects—precisely the problem this new technique helps to overcome.
Having recently watched a grey squirrel carefully plot a route through a line of trees, I was struck by the sophistication of its behaviour. It was not simply moving at random. It clearly knew where it wanted to go and was able to take into account such factors as how much slender branches would bend under its weight, how wide a gap it could safely jump, and—perhaps most importantly—exactly where it was within its own mental map of the environment. It is difficult to see how such behaviour could be possible in a creature that was not conscious and, to some degree, self-aware.
In animal psychology, there is now little doubt that many vertebrates possess some level of self-awareness and therefore consciousness. The remaining debate has centred not on whether consciousness exists in non-human animals, but on how it arose. The fact that consciousness is found across a wide range of vertebrates, and even in molluscs such as cephalopods, suggests either that it originated in a remote common ancestor or that it evolved independently multiple times through convergence. Either way, this strongly points to an evolutionary origin.
According to two papers published in a special edition of the journal Philosophical Transactions of the Royal Society B, by working groups led by Professors Albert Newen and Onur Güntürkün at Ruhr University Bochum in Germany, consciousness can indeed be explained as the outcome of an evolutionary process, with each step conferring a selective advantage. Moreover, consciousness only makes sense as an evolved biological function. The two open-access papers can be found here and here.
This work is bound to provoke another bout of denialism among creationists, for whom consciousness remains one of the standard “impossible to explain without supernatural intelligence” fallback arguments. As with abiogenesis and the Big Bang, the reasoning typically amounts to: “Science hasn’t explained it and I don’t understand how it could, therefore God did it.” This false dichotomy conveniently removes any obligation to provide evidence in support of the supernatural claim. Creationists also like to flatter themselves that consciousness is a uniquely human trait and thus evidence of special creation. In scientific terms, however, this does not even rise to the level of a hypothesis: it proposes no mechanism, makes no testable predictions, and is unfalsifiable by design. It is, in essence, wishful thinking rooted in the belief that the Universe is obliged to conform to personal expectations.
By contrast, the Ruhr University team have identified three distinct levels of consciousness and demonstrated the evolutionary advantage of each, drawing on detailed studies of birds that show parallel forms of consciousness to those seen in humans. These levels are:
Basic arousal — such as the perception of pain, which signals that harm is occurring and that corrective action is required.
General alertness — awareness of the broader environment, allowing threats and opportunities to be recognised and responded to appropriately.
Reflexive (self-)consciousness — the ability to place oneself within an environment, learn from past experience, anticipate future outcomes, and formulate an action plan; in other words, to construct a narrative with oneself as a participant.
Species of Balanophora are parasitic plants that live underground and emerge above ground only during the flowering season — and some species even reproduce exclusively asexually. This collage shows species studied to establish how the plants of that group relate to each other, how they modified their plastids and how their reproduction fits into their ecology.
A recently published paper in New Phytologist on the biology of the parasitic plants Balanophora, by three botanists from the Okinawa Institute of Science and Technology, Japan, together with Kenji Suetsugu of Kobe University, should cause consternation in creationist circles — if only they were not so practised at dismissing any evidence that contradicts their superstition.
Not only does the study highlight the well-known problem of parasitism, which creationists typically attempt to wave aside by invoking “The Fall” — thereby exposing any claim that creationism is a genuine science rather than a form of Christian fundamentalism as a lie — it also reveals that the evolution of this group of plants has involved a loss of complexity, coupled with the repurposing of redundant structures. The result is what creationists themselves would describe as irreducible complexity, accompanied by precisely the kind of “complex specified genetic information” that William A. Dembski insists should be regarded as evidence for intelligent design.
Then there is the problem of an overly complex solution: instead of simply giving the plants the genes they need, some essential genes are housed in cell organelles. These organelles are clearly repurposed chloroplasts that no longer perform photosynthesis, produced by an evolutionary process that creationists deny, leaving them to explain why an intelligent designer would opt for such a convoluted arrangement.
Finally, the findings rely entirely on the Theory of Evolution to explain and make sense of the observations, with no hint of any need to invoke the supernatural magic upon which creationism depends — despite repeated assurances from creationist cult leaders to their followers that such a moment is imminent, a promise they have been making for over half a century.
Dugongs and manatees — the surviving members of the order Sirenia — are among the most revealing mammals when it comes to understanding evolution. Fully aquatic yet air-breathing, specialised yet constrained by their ancestry, they provide one of the clearest examples of how complex organisms arise through gradual modification rather than sudden creation.
Unlike whales, which are now well known as a textbook evolutionary transition, sirenians are less familiar to the public. That makes them especially valuable, because their fossil record is remarkably complete, their evolutionary trajectory is straightforward, and their genetic relationships were discovered independently of their anatomy. Taken together, they present a problem for creationism that cannot be explained away.
Terrestrial origins.
The earliest known sirenians lived around 50 million years ago and were unmistakably terrestrial or semi-aquatic mammals.
An artistic reconstruction of a herd of ancient sea cows foraging on the seafloor
Alex Boersma
Fossils of Salwasiren qatarensis, a newly described 21-million-year-old ancient sea cow species found in Al Maszhabiya [AL mahz-HA-bee-yah], a fossil site in southwestern Qatar.
Scientists from the Smithsonian’s National Museum of Natural History, together with collaborators at Qatar Museums, have just announced the discovery of 20-million-year-old fossils of a sea cow that was a miniature version of living dugongs, and which almost certainly lived in the same seagrass meadows as modern dugongs.
If there is one thing that has creationists scraping the bottom of their barrel for reasons to dismiss evidence, it is news of fossils that are tens of millions of years older than they believe the universe is — simply because Bronze Age authors of their favourite source book, the Bible, said so.
In their determination to show the world that nothing can make them change their belief in the demonstrably absurd, creationists will resort to false accusations of lying against scientists, claim they are incompetent, or insist that they used dating methods they claim (incorrectly) to have been proven false, all in an attempt to preserve their beliefs. It is as though they imagine the entire global scientific community, and all the research institutions within it, exist solely to disprove the Bible in order to make creationists change their minds.
For rational people without such an egocentric view of the world, however, discoveries such as these miniature dugongs help to paint a fascinating picture of how species — and the ecosystems of which they are a part — have evolved over time. The fossils were found about 10 miles from a bay of seagrass that is prime habitat for modern dugongs.
As I’ve pointed out many times, 99.99975% of Earth’s history took place before the period in which creationists—treating the Bible as literal historical truth—believe the planet itself existed. It is remarkable how effectively biblical literalists manage to ignore, distort, or otherwise dismiss almost the entire body of geological, archaeological, and palaeontological evidence in order to cling to the easily refuted notion of a 6,000–10,000-year-old Earth and a global genocidal flood supposedly occurring about 4,000 years ago.
Unsurprisingly, discoveries such as the one below make no impression whatsoever on committed creationists.
Now archaeologists from Aarhus University, working with colleagues from the National Museum of Denmark as well as teams from Germany, Sweden, and France, have uncovered yet another piece of evidence destined for creationist dismissal: blue pigment on a stone artefact dating from around 13,000 years ago. Their findings were recently published in Antiquity.
Not only should this archaeology not exist at all if the biblical timeline were correct, but even if it had somehow escaped the supposed global flood, it would necessarily be buried beneath a thick, worldwide layer of sediment containing a chaotic mixture of fossil plants and animals from disconnected continents. No such layer has ever been found anywhere on Earth. A truly global flood, as described in Genesis, would have left unmistakable and ubiquitous geological signatures. It did not.
The blue pigment was discovered on a shaped, concave stone originally thought to be an oil lamp but now believed to have served as a mixing palette. Until now, only black and red pigments had been identified on Palaeolithic artefacts, leading archaeologists to assume these were the only colours available. The presence of blue pigment suggests something more nuanced: selective use of colours for different purposes, with blue likely used primarily for body decoration or dyeing clothing—activities that rarely leave direct archaeological traces.
You might not realise it, but if you just played that audio file, then, according to researchers at the Université de Genève, Switzerland, a region of your brain called the auditory cortex just 'lit up'.
This region is responsible for voice recognition, and it responds not only to human voices but also to the calls of common chimpanzees (Pan troglodytes). Notably, the same response is not seen with the calls of bonobos (Pan paniscus) or rhesus macaques (Macaca mulatta). Their findings have been published open access in eLife.
This discovery presents creationists with yet another problem to be ignored, misrepresented or lied about.
Using William A. Dembski’s so-called “proof of intelligent design” — complex specified genetic information, widely cited by creationists as evidence for design and against evolution — we are entitled to ask an obvious question. Why would an intelligent designer create genetic information for a supposedly “too complex to have evolved by random chance” region of the human brain that responds selectively to chimpanzee calls?
What, precisely, was this ability designed for?
By contrast, the evolutionary explanation is straightforward. If humans and chimpanzees share a relatively recent common ancestor, we would expect some neural processing traits to be retained, particularly where there has been no strong selection pressure to eliminate them.
The finding does, however, raise an interesting secondary question: why do we not respond in the same way to bonobo calls?
The answer is likely to come from evolutionary biology. Chimpanzees and bonobos diverged fairly recently, and there may have been a selective advantage for bonobo calls not to be recognised by chimpanzees. Chimpanzees are known to kill and eat bonobos if given the opportunity, so selection may have favoured divergence in vocal signals — with the consequence that humans also lost sensitivity to bonobo calls.
Once again, we encounter a feature of nature that is difficult to reconcile with the notion of an intelligent designer, yet entirely consistent with evolutionary processes acting on shared ancestry, divergence, and selection pressures.
Scientifically, the work is also of considerable interest, as it may shed light on how human speech recognition and language development arise in children. For the creationist, however, it is merely one more inconvenient piece of evidence — to be filed under “not wanted — reject” or “evidence of a Satanic conspiracy — ignore”.
Scientists at Aarhus University, Denmark, have discovered that barley can be induced to form a symbiotic relationship with nitrogen-fixing bacteria through a simple substitution of two amino acids in a single protein. This tweak enables barley to initiate the same sort of symbiosis that legumes use to “self-fertilise”. They have published their findings in Proceedings of the National Academy of Sciences of the USA.
This is yet another case where we can legitimately ask: if scientists can do it, why didn’t creationism’s supposed intelligent designer do it, if its intent were truly to create a world optimised for human existence? The question remains unanswered, often provoking threats and hysteria on social media as creationists scramble to cover their confusion with guesses rooted in Christian fundamentalism and Biblical tales of “The Fall”, their stock theological patch. The forlorn Discovery Institute and its fellows, meanwhile, remain as silent on this issue as they are on parasites and pathogens, still struggling to sustain the pretence that ID creationism is real science rather than Bible-literalist creationism dressed in a grubby lab coat.
The Aarhus researchers found that a highly conserved protein, present across plant species, plays a crucial role in plant–microbe interactions—presumably as part of the plant’s defence against pathogens. However, in legumes the same protein must be suppressed, because its normal activity prevents formation of the root nodules that act as low-oxygen refuges for the nitrogen-fixing bacteria on which legumes depend. A simple mutation in this protein allows nodule formation in barley, enabling the crop to produce its own nitrogen fertiliser, increasing yields without the expense of artificial fertilisers and without the ecological harm they cause when they leach into waterways.
If we take creationist claims about the human body at face value – that we are the special design of an omniscient, omnipotent creator god – we would have to conclude that this putative god equipped us for life in small, dispersed bands of hunter-gatherers, entirely free from the pressures of modern urban existence. That is the inescapable implication of new work by Daniel P. Longman of the School of Sport, Exercise and Health Sciences, Loughborough University, UK, and Colin N. Shaw of the Department of Evolutionary Anthropology, University of Zürich, Switzerland.
In their study, recently published in Biological Reviews, they argue that human evolutionary fitness has deteriorated markedly over the past 300 years, beginning with the Industrial Revolution. They attribute this to the escalating stresses of urban life, which are increasingly linked to counter-survival problems such as declining fertility rates and the rising prevalence of chronic inflammatory conditions, including autoimmune diseases. They also highlight impaired cognitive function in urban settings, with chronic stress playing a central role in many of these conditions.
As they note, our stress responses were shaped in environments where predators such as lions posed intermittent but existential threats. A sudden burst of adrenaline and cortisol – the classic fight-or-flight reaction – made the difference between survival and being eaten. Today, however, we summon exactly the same physiological response to traffic noise, difficult conversations with colleagues or family, and that irritatingly arrogant but ignorant creationist on the Internet. Where a lion encounter would once have been an occasional shock, we now experience the physiological equivalent of facing several lions a day.
For creationists, this poses an awkward problem. An omniscient designer should have foreseen humanity’s future circumstances and endowed us with a physiology robust enough to cope with them. Evolution, by contrast, cannot predict even the next generation, let alone the demands of life tens or hundreds of millennia later. It optimised our ancestors for survival on open African landscapes, not for navigating congested cities, chronic noise, 24-hour information streams, and the relentless stimuli of modern technology. This helps explain why our inherited design is increasingly mismatched to our environment, and why evolution cannot adjust us quickly enough to keep pace.
My own family history illustrates this accelerating mismatch. My grandparents grew up in rural Oxfordshire, before the arrival of the motor car, electricity, modern sanitation, or powered heating. Their lives were essentially unchanged from those of their parents and grandparents. My parents, by contrast, had electricity, piped water, proper sanitation, and radio; later a motor car, a television, and eventually a telephone. Now we have smartphones, laptops, air travel, satnavs, and city centres jammed with traffic. We spend hours each day staring at screens, communicating instantly across the world. My grandparents’ lives would have been recognisable to their great-grandparents, but mine would be unrecognisable to them – such has been the accelerating pace of technological change. No evolutionary process could possibly adapt a species to that speed of environmental transformation.
We are, in effect, experiencing stress levels akin to those of ancestors living among a pride of lions, not merely encountering one on rare occasions. And crucially, we have little or no time to recover before the next ‘lion’ appears.
Researchers at Cedars-Sinai have developed a synthetic RNA molecule that can help repair DNA in damaged tissues such as the myocardium following a myocardial infarction (MI). Their research is the basis for a paper just published in Science Translational Medicine.
This raises a troubling question for ID creationists: if scientists can do it, why couldn’t their putative omniscient, omnipotent and, above all, omnibenevolent designer god do it? There are only a limited number of possibilities if we grant the ID proponents their designer god for the sake of argument:
It lacks the ability — in other words, it isn’t omnipotent.
It didn’t know it would be needed — in other words, it isn’t omniscient.
It doesn’t care — in other words, it isn’t omnibenevolent.
It doesn’t want us to repair damaged DNA so we continue to suffer the consequences — in other words, it is malevolent.
This is, of course, just another example of science discovering something that any intelligent, benevolent designer would have anticipated and provided, if such an entity had really designed us.
So, apart from those explanations — none of which flatter their putative designer — the only option left to creationists is that the absence of this DNA repair mechanism is the result of an unintelligent natural process in which their supposed designer played no part, such as evolution.
Unfortunately for them, accepting the naturalistic explanation would mean abandoning creationism and admitting to being wrong. And creationists don’t see admitting error as the intellectually honest thing to do, but as a sign of weakness, a capitulation to scientists and the physical evidence they imagine are ganging up to test their resolve.
This should trouble any creationist who understands the implications, so their cult leaders need to work hard to ensure none of their followers know about these things or develop the intellectual sophistication to appreciate the consequences.
Research led by the University of Bristol and published in the journal Nature a few days ago suggests that the transition from simple prokaryote cells to complex eukaryote cells began almost 2.9 billion years ago – nearly a billion years earlier than some previous estimates. Prokaryotes — bacteria and archaea — had been the dominant, indeed the only, life forms for the preceding 1.1 billion years, having arisen within the first few hundred million years after Earth coalesced about 4.5 billion years ago.
Creationists commonly forget that for the first billion or more years of life on Earth, it consisted solely of single-celled prokaryotes — bacteria and archaea. They routinely post nonsense on social media about the supposed impossibility of a complex cell spontaneously assembling from ‘non-living’ atoms — something no serious evolutionary biologist has ever proposed as an explanation for the origin of eukaryote cells.
There is now little doubt among biologists that complex eukaryote cells arose through endosymbiotic relationships between archaea and bacteria, which may have begun as parasitic or predator–prey interactions before evolving into symbioses as the endpoint of evolutionary arms races. The only questions concern when exactly eukaryote cells first began to emerge, and what triggered their evolution.
The team collected sequence data from hundreds of species and, combined with fossil evidence, reconstructed a time-resolved tree of life. They then used this framework to resolve the timing of historical events across hundreds of gene families, focusing on those that distinguish prokaryotes from eukaryotes.
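The core logic of fossil-calibrated (“molecular clock”) dating can be shown in miniature. The sketch below is a drastic simplification of what the Bristol team actually did (their analysis used relaxed-clock Bayesian methods across many gene families); the distances, the calibration age, and the rate are invented numbers. It only illustrates the arithmetic: a fossil-dated split fixes the substitution rate, and that rate then converts genetic distance into time.

```python
# Minimal strict-clock illustration (all numbers hypothetical).
# A fossil-calibrated split gives a per-lineage substitution rate; that rate then
# converts an observed genetic distance between two other lineages into a divergence time.

calibration_age_myr = 500.0    # fossil-dated split between two calibration lineages (made up)
calibration_distance = 0.40    # substitutions per site separating those two lineages (made up)

# Distance accumulates along BOTH lineages since the split, hence the factor of 2.
rate_per_myr = calibration_distance / (2.0 * calibration_age_myr)

query_distance = 1.85          # distance between two gene-family copies of interest (made up)
estimated_split_myr = query_distance / (2.0 * rate_per_myr)

print(f"calibrated rate: {rate_per_myr:.5f} substitutions/site/Myr per lineage")
print(f"estimated divergence: {estimated_split_myr:.0f} million years ago")
# Real analyses relax the constant-rate assumption and combine many genes and fossils,
# which is how dates such as ~2.9 billion years are obtained with credible intervals.
```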
One surprising finding was that mitochondria were late to the party, arising only as atmospheric oxygen levels increased for the first time — linking early evolutionary biology to Earth’s geochemical history.
The venomous stinger of an Asian giant hornet (Vespa mandarinia). The venom injected by this stinger can cause sharp, intense pain as well as local tissue damage and systemic effects such as destruction of red blood cells and cardiac dysfunction, which may even be fatal.
Yes. As I’ve observed myself, the common pond frog eats wasps apparently with impunity. I once watched a frog in our garden pond consume three wasps within a few minutes as they came down to drink. These frogs have, of course, evolved in the presence of wasps.
Now, according to research by Shinji Sugiura at Kobe University, Japan, published today, open access, in the journal Ecosphere, frogs that have evolved alongside an even more dangerous member of the wasp family – the Asian giant hornet – have also evolved resistance to venom that is toxic, even lethal, to many other creatures.
Creationists, however, insist that evolution does not happen and that wasps, frogs, and hornets were all intelligently designed by a supernatural deity synonymous with the god of the Bible and Qur’an. This leaves us wondering why an allegedly omnipotent, omniscient, supremely intelligent designer would equip wasps and hornets with a sting to defend themselves against predators, only then to design predators with resistance to that sting.
Creationists normally ignore this question, of course. Even their stock excuse – 'The Fall' – cannot be applied here. Neither frog nor hornet is parasitic on the other, except in the trivial sense that any predator is a “parasite” on its prey. But in this case, the frog appears to be the beneficiary: it gains a meal at no cost, while the wasp or hornet loses its life. And it is difficult to imagine that the genes conferring this immunity do not fall within William A. Dembski’s definition of “complex, specified information”. If they do not, then nothing producing a beneficial outcome can be so classified, and his argument for the existence of an intelligent designer collapses.
As the outcome of an evolutionary arms race, both the sting and the resistance in frogs make perfect sense—no need to invoke some forgetful designer who cannot recall what it supposedly created yesterday and treats it as a problem to be solved today.
In the case of these frogs, there may even be two distinct forms of immunity: resistance to pain and resistance to toxicity. It is already known that some hymenopterans deliver an excruciating sting with low toxicity, while others deliver a highly toxic sting with little or no pain.
Scientists at the Institute of Science and Technology, Austria, have found that terminally ill pupae in an ant colony emit a chemical signal that prompts worker ants to disinfect them with formic acid — a process that also brings about their death. This behaviour helps keep the colony free from infection and represents a clear example of evolved altruism with a genetic basis. Their findings are reported, open access, in Nature Communications.
One of the criticisms often levelled at evolutionary biology is that it cannot explain altruism, since individuals that sacrifice themselves for others seemingly shouldn’t survive to pass on any genes responsible for such behaviour.
This is plainly untrue. Acts of altruism are widespread in nature: male spiders and mantises are consumed by their mates, providing nutrients for developing eggs; the offspring of social spiders consume their mother, then go on to consume one another. These behaviours persist because they enhance the success of the genes involved.
The key lies in what Richard Dawkins termed the selfish gene. Contrary to creationist misrepresentations, this is not a claim that there exists a gene for selfishness. It refers instead to the way genes appear to act in their own interests. Genes promoting altruistic behaviour benefit when that behaviour increases the reproductive success of individuals carrying the same genes — typically close relatives. The sacrifice of one carrier can thereby enhance the spread of the genes responsible for the altruism.
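The standard way to formalise this is Hamilton’s rule: an allele promoting a self-sacrificial act can spread when r × b > c, where r is the genetic relatedness between actor and beneficiary, b the reproductive benefit to the beneficiary, and c the cost to the actor. The toy check below uses illustrative values only, not measurements from any real population.

```python
# Hamilton's rule: an altruistic trait is favoured when relatedness * benefit > cost (r*b > c).
# The example values are illustrative, not data from any real study.

def favoured_by_kin_selection(relatedness, benefit, cost):
    """Return True if Hamilton's rule predicts the altruistic allele can spread."""
    return relatedness * benefit > cost

# Full siblings share on average half their alleles by descent (r = 0.5), so a sacrifice
# costing the actor 1 unit of reproductive success pays off if it adds >2 units to a sibling.
print(favoured_by_kin_selection(relatedness=0.5, benefit=3.0, cost=1.0))    # True
print(favoured_by_kin_selection(relatedness=0.125, benefit=3.0, cost=1.0))  # False (first cousin)
```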
In humans, altruism arises not only from genetic evolution but also from memetic evolution — the inheritance and adaptation of ideas, norms, and cultural expectations. Human altruism rarely requires life-or-death sacrifice; it more often involves smaller acts such as sharing resources, giving up a seat on a bus, or letting another driver go first at a junction. The advantage, at both genetic and memetic levels, is that such behaviours help build societies where cooperation is reciprocated. Altruism is ultimately an investment in a more stable, supportive environment that may benefit the genes and memes of the individuals who contribute to it.
Researchers from Switzerland and Japan, led by Professor Yohei Yamauchi of Eidgenössische Technische Hochschule Zürich (ETH Zürich), have developed a microscopy technique that enables real-time, high-resolution observation of how a virus gains entry to a cell. Their findings are described in the Proceedings of the National Academy of Sciences of the USA (PNAS).
The process, in which a virus exploits the pathways cells normally use to take in larger molecules such as hormones, cholesterol, or iron, involves the active cooperation of the cell as it reaches out to engulf the viral particle. This mechanism is triggered by receptors on the cell surface, to which viruses bind while ‘surfing’ along the membrane, seeking regions rich in receptors to form a stable attachment.
Creationists often portray this as an “irreducibly complex” system, supposedly dependent on all components being present from the outset, requiring what they call “complex specified information” in both virus and cell to produce the receptors and binding proteins. Discovery Institute fellows Michael J. Behe and William A. Dembski present this as evidence of intelligent design.
Their argument depends on a statistical sleight of hand: they treat the entire process as though it originated in a single event involving one cell and one virus, then calculate improbabilities for each step and multiply them together, producing a vanishingly small likelihood of the whole mechanism arising spontaneously. This ignores the fact that evolution operates in populations — often large ones — across long periods, where components accumulate gradually over generations, dramatically increasing the probability of multiple features emerging together in the same lineage.
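A toy calculation makes the difference concrete. The numbers below are invented purely for illustration, and the model is deliberately crude (it ignores selection, drift, and the need for each step to spread before the next); it simply contrasts the “all steps at once in one individual” probability with the probability that each step arises at least once somewhere in a large population given many generations of trials.

```python
# Toy contrast between a "single event" calculation and a population spread over many
# generations. All numbers are invented; the model ignores selection and fixation dynamics
# and is only meant to show how population size and time change the odds.

per_step_chance = 1e-9    # chance a given step arises in one individual in one generation
steps = 5                 # number of required components
population = 1e8          # individuals per generation
generations = 1e6         # generations available for each step

# "Single event" framing: every step must happen at once in one individual.
single_event = per_step_chance ** steps

# Population framing: probability each step arises at least once across N*G trials,
# multiplied over steps (treating the steps as acquired one after another).
trials_per_step = population * generations
prob_step_appears = 1.0 - (1.0 - per_step_chance) ** trials_per_step
sequential = prob_step_appears ** steps

print(f"all at once in one individual  : {single_event:.1e}")   # ~1e-45
print(f"step by step across a population: {sequential:.3f}")    # effectively certain
```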
It also overlooks the billions of years during which viruses and cells have co-evolved. As multicellular organisms evolved ever more sophisticated ways of receiving and responding to external signals and substances, viruses simultaneously improved their ability to exploit those mechanisms.
But to the scientifically illiterate target audience of the ID-creationism industry, evolution is imagined as a single event rather than a continuous process, leaving them oblivious to the misuse of probability and the underlying mathematical errors.
When biologists point out the obvious fact that this argument presents their putative god as some sort of celestial malevolence, creationists who try to use it for intelligent design usually respond by retreating into Bible literalism and religious fundamentalism and invoking the mythical 'Fall', thereby exposing as a lie the claim of the Discovery Institute and its fellows that ID is real science, not Bible-literalist creationism dressed in a lab coat.
How influenza viruses enter our cells
For the first time, researchers have observed live and in high resolution how influenza viruses infect living cells. This was possible thanks to a new microscopy technique, which could now help to develop antiviral therapies in a more targeted manner.
In brief
For the first time, a new high-resolution microscopy technique has allowed researchers to watch live as influenza viruses infect cells.
The international team led by ETH Zurich found that the cells actively promote virus uptake.
This technique could now help to develop antiviral therapies in a more targeted manner.
Fever, aching limbs and a runny nose – as winter returns, so too does the flu. The disease is triggered by influenza viruses, which enter our body through droplets and then infect cells.
Researchers from Switzerland and Japan have now investigated this virus in minute detail. Using a microscopy technique that they developed themselves, the scientists can zoom in on the surface of human cells in a Petri dish. For the first time, this has allowed them to observe live and in high resolution how influenza viruses enter a living cell.
Led by Yohei Yamauchi, Professor of Molecular Medicine at ETH Zurich, the researchers were surprised by one thing in particular: the cells are not passive, simply allowing themselves to be invaded by the influenza virus. Rather, they actively attempt to capture it.
“The infection of our body cells is like a dance between virus and cell.”
Professor Yohei Yamauchi, corresponding author.
Molecular Medicine Laboratory
Institute of Pharmaceutical Sciences
Department of Chemistry and Applied Biosciences
Eidgenössische Technische Hochschule Zürich
Zürich, Switzerland.
Viruses surf on the cell surface
Of course, our cells gain no advantage from a viral infection or from actively participating in the process. The dynamic interplay takes place because the viruses commandeer an everyday cellular uptake mechanism that is essential for the cells. Specifically, this mechanism serves to channel vital substances, such as hormones, cholesterol or iron, into the cells.
Like these substances, influenza viruses must also attach to molecules on the cell surface. The dynamics are like surfing on the surface of the cell: the virus scans the surface, attaching to a molecule here or there, until it has found an ideal entry point – one where there are many such receptor molecules located close to one another, enabling efficient uptake into the cell.
Once the cell’s receptors detect that a virus has attached itself to the membrane, a depression or pocket forms at the location in question. This depression is shaped and stabilised by a special structural protein known as clathrin. As the pocket grows, it encloses the virus, leading to the formation of a vesicle. The cell transports this vesicle into its interior, where the vesicle coating dissolves and releases the virus.
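As a purely illustrative caricature of that "surfing" search (not the authors' analysis, and with invented numbers), one can picture a particle random-walking over a surface of varying receptor density and committing to entry only where the local density is high enough for stable multivalent attachment:

```python
import random

# Purely illustrative caricature of receptor "surfing": a particle
# random-walks over a toy grid whose cells hold receptor densities, and
# commits to endocytosis only where the local density is high enough for
# stable multivalent attachment. Grid size, densities and the threshold are
# all invented for illustration.

random.seed(1)
SIZE = 50
THRESHOLD = 0.9   # assumed density above which attachment is "stable"

grid = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]

x, y = SIZE // 2, SIZE // 2
steps = 0
while grid[y][x] < THRESHOLD:
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x = (x + dx) % SIZE   # wrap around the toy surface
    y = (y + dy) % SIZE
    steps += 1

print(f"Committed to entry after {steps} steps, local receptor density {grid[y][x]:.2f}")
```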
Previous studies investigating this key process used other microscopy techniques, including electron microscopy. As these techniques entailed the destruction of the cells, they could only ever provide a snapshot. Another technique that is used – known as fluorescence microscopy – only allows low spatial resolution.
Combined techniques, including for other viruses
The new technique, which combines atomic force microscopy (AFM) and fluorescence microscopy, is known as virus-view dual confocal and AFM (ViViD-AFM). Thanks to this method, it is now possible to follow the detailed dynamics of the virus’s entry into the cell.
Video: Nicole Davidson / ETH Zurich.
Using this technique, the researchers have been able to show that the cell actively promotes virus uptake on several levels. For instance, the cell recruits the functionally important clathrin proteins to the point where the virus is located, and the cell surface captures the virus by bulging up at that point. These wavelike membrane movements become stronger if the virus moves away from the cell surface again.
The new technique therefore provides key insights when it comes to the development of antiviral drugs. For example, it is suitable for testing the efficacy of potential drugs in a cell culture in real time. The study authors emphasise that the technique could also be used to investigate the behaviour of other viruses or even vaccines.
Significance
Influenza A viruses (IAVs) continue to cause epidemics worldwide due to their high mutability. Nevertheless, the initial step of infection, viral uptake into cells, has been challenging to observe directly with conventional microscopy techniques. Here, we developed a hybrid imaging system combining atomic force microscopy and confocal microscopy with enhanced mechanical functionality and minimal invasiveness to directly visualize nanoscale dynamics of IAV and cell membranes during viral uptake into living cells. This system enables the analysis of IAV lateral diffusion resulting from IAV–membrane interactions and characteristic membrane morphological changes induced by IAV during endocytosis. Our approach offers a method to rapidly assess the impact of viral mutations on host cell entry, which is critical for understanding emerging IAV variants.
Abstract
Influenza A virus (IAV) entry into host cells begins with interactions between the viral envelope proteins hemagglutinin (HA)/neuraminidase (NA) and sialic acid moieties on the cell plasma membrane. These interactions drive IAV’s lateral diffusion along the cell membrane and trigger membrane morphological changes required for endocytosis. However, directly visualizing these dynamic processes, which are crucial for IAV entry, has been challenging using conventional microscopy techniques. In this study, we enabled live-cell observation of nanoscale morphological dynamics of IAV and the cell membrane by reducing the mechanical invasiveness of atomic force microscopy (AFM). A customised cantilever with less than half the spring constant of conventional cantilevers enabled virus-view AFM imaging that preserved IAV–membrane interactions. By combining virus-view AFM with confocal microscopy, we performed correlative morphological and fluorescence observations of IAV lateral diffusion and endocytosis in living cells. Variations in diffusion coefficients of single virions suggested heterogeneity in sialic acid density on the cell membrane. NA inhibition decreased diffusion coefficients, while reduced sialic acid density increased them. The timing of clathrin accumulation at virion binding sites coincided with a decrease in diffusion coefficients, a relationship that was maintained independent of NA activity or sialic acid density. As clathrin assembly progressed, ~100-nm-high membrane bulges emerged adjacent to the virus, culminating in the complete membrane envelopment of the virus at peak clathrin accumulation. Our virus-view AFM will deepen our understanding of various virus–cell interactions, facilitate the evaluation of drug effects and promote future translational research.
Influenza A virus (IAV) is an enveloped RNA virus with two key surface glycoproteins: hemagglutinin (HA) and neuraminidase (NA). The virus surface contains 300 to 400 HA and 40 to 50 NA molecules (1). IAV envelope proteins comprise at least 18 HA and 11 NA subtypes (2), which enable IAV to infect various host species including humans, birds, pigs, bats, and other animals (3). These envelope proteins play crucial roles in IAV infection of host cells.
They interact with sialic acids on cell surface glycolipids and glycoproteins (4) or with major histocompatibility complex class II (MHC class II) molecules (5–7). HA binds to sialic acids at the terminal ends of glycan chains on the cell surface. The HA–sialic acid interactions are inherently weak, with dissociation constants typically in the millimolar range (0.9 to 68.4 × 10−3 M) (8–10). However, multivalent binding of multiple HAs to sialic acids enables IAV to stably adhere to the cell membrane (11, 12). Meanwhile, NA catalyzes the cleavage of sialic acids (13), inhibiting stable adhesion of IAV to the cell membrane. Through these mechanisms, HA and NA effectively regulate the attachment and detachment of IAV to the cell membrane.
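A crude back-of-the-envelope illustration of why many individually weak contacts add up to stable attachment, under the simplifying assumption that the contacts behave independently (which real HA–sialic acid interactions need not do):

```latex
% Illustration only, assuming n independent contacts, each unbound at any
% instant with probability q: the virion fully detaches only when all n
% contacts are unbound at the same time.
P(\text{fully detached}) = q^{\,n},
\qquad\text{e.g. } q = 0.5,\ n = 10 \;\Rightarrow\; q^{n} \approx 10^{-3}
```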
The competitive action between HA and NA allows IAV to diffuse laterally along the cell membrane surface topology. This lateral diffusion represents a critical dynamic macroscopic phenomenon reflecting virus–membrane interactions. However, conventional microscopy techniques have struggled to detect IAV movement on the 10-nm-thick cell membrane, resulting in limited visualization success (15–18).
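For readers unfamiliar with how lateral diffusion is quantified, the standard generic approach (not necessarily the exact pipeline used in this paper) is to compute a particle's mean squared displacement and fit a diffusion coefficient, using MSD(t) = 4Dt for free two-dimensional diffusion. A minimal sketch, with a synthetic trajectory standing in for real single-virion tracking data:

```python
import numpy as np

# Minimal sketch of estimating a 2D lateral diffusion coefficient from a
# particle trajectory via mean squared displacement (MSD = 4*D*t in 2D).
# The trajectory here is synthetic Brownian motion; in practice it would come
# from single-virion tracking. All numbers are assumed for illustration.

rng = np.random.default_rng(0)
D_TRUE = 0.05      # assumed diffusion coefficient, um^2/s
DT = 0.1           # assumed frame interval, s
N_FRAMES = 2000

# Simulate Brownian steps: each displacement component ~ N(0, sqrt(2*D*dt))
steps = rng.normal(0.0, np.sqrt(2 * D_TRUE * DT), size=(N_FRAMES, 2))
traj = np.cumsum(steps, axis=0)

def msd(trajectory, max_lag):
    """Time-averaged MSD for lag times 1..max_lag (in frames)."""
    out = []
    for lag in range(1, max_lag + 1):
        disp = trajectory[lag:] - trajectory[:-lag]
        out.append(np.mean(np.sum(disp**2, axis=1)))
    return np.array(out)

lags = np.arange(1, 21) * DT
msd_vals = msd(traj, 20)

# Linear fit of MSD = 4*D*t, so slope / 4 estimates D.
slope = np.polyfit(lags, msd_vals, 1)[0]
print(f"Estimated D = {slope / 4:.3f} um^2/s (true value {D_TRUE})")
```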
HA-NA-sialic acid interactions also trigger endocytosis involving morphological changes of the cell membrane. When diffusing IAV binds to functional receptors such as EGFR (19) and Cav1.2 (20) through sialic acids, it initiates the recruitment and assembly of the endocytic machinery including clathrin, actin, and dynamin. IAV utilizes multiple entry pathways including clathrin-mediated endocytosis (CME), macropinocytosis, and both clathrin-independent and dynamin-independent mechanisms (16, 21–23).
IAV primarily utilizes CME for cellular entry (16, 21). Previous imaging of membrane dynamics using atomic force microscopy (AFM) has revealed that in IAV-free CME, clathrin-coated membrane invaginations (pits) larger than 100 nm in diameter form (24, 25). This is accompanied by the emergence of actin-dependent membrane bulges that develop on one side of the pit and eventually lead to its closure. Although electron microscopy has provided morphological snapshots of pits during IAV internalization (26), the membrane dynamics during IAV internalization via CME have yet to be successfully visualized.
AFM enables mechanical imaging of sample morphology with nanometer-scale resolution (27, 28). Since the development of high-speed AFM in 2001 (29), this technique has contributed significantly to molecular dynamics analysis (30–36). Additionally, the advent of cell-imaging AFM in 2013 has enabled advances in membrane dynamics analysis (37, 38). The integration of cell-imaging AFM combined with confocal microscopy has provided unique capabilities for observing nanoscale membrane morphological changes in living cells (24, 25). Despite these advances, a major challenge persists: the mechanical interference of the cantilever with biological samples. Visualizing the dynamic processes of IAV lateral diffusion and internalization requires an innovative technology capable of simultaneously observing the nanoscale morphology of the 10-nm-thick cell membrane and the 100-nm spherical IAV interacting with cell surface sialic acid-bearing glycolipids and proteins. Given that multivalent IAV–membrane interaction forces are relatively weak, ranging from 10 to 25 pN (39), achieving low-invasive imaging capabilities is critical.
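The link between cantilever stiffness and invasiveness is simply Hooke's law: at a given tip deflection, the force applied to the sample scales with the spring constant, so halving the spring constant halves the force. The worked number below uses an assumed illustrative stiffness of 0.1 N/m, not a value taken from the paper:

```latex
% Hooke's law for the cantilever tip. k = 0.1 N/m is an assumed illustrative
% value; F_max = 25 pN is the upper end of the interaction forces cited above.
F = k\,z
\quad\Longrightarrow\quad
z_{\max} = \frac{F_{\max}}{k} = \frac{25\ \mathrm{pN}}{0.1\ \mathrm{N\,m^{-1}}} = 0.25\ \mathrm{nm}
```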
In this study, we address and overcome the challenge of mechanical interference by enhancing the low invasiveness of AFM through the use of a customised soft cantilever. In combination with confocal microscopy, low-invasive AFM enables simultaneous live-cell imaging of both morphology and fluorescence. The redesigned cantilever minimizes disruption of IAV–membrane interactions, allowing accurate observation of viral dynamics. Using this system, we investigated the lateral diffusion of single IAV particles under various conditions, including NA inhibition, reduced cell surface sialic acid density, and different viral subtypes. We also analyzed membrane morphological changes before and during IAV endocytosis. While fluorescently labeled IAV was primarily used, we also demonstrate our AFM’s capability to track unlabeled viruses. This virus-view dual confocal and AFM, called ViViD-AFM, enables correlative morphological and fluorescence imaging of IAV–membrane dynamics, providing nanoscopic insights into HA-NA-sialic acid interactions.
What ID advocates never seem to notice is that, in arguing that such mechanisms must have been deliberately engineered, they are attributing to their designer a system in which viruses are given exquisitely tailored tools for invading the very cells it supposedly created. If one insists that this is intentional design, then one must also accept that the designer crafted the molecular equivalent of lockpicks and battering rams, optimised for breaching living tissue. It is difficult to reconcile this with any notion of benevolence.
Indeed, by rejecting evolution as the explanation for viral entry, ID proponents corner themselves into an uncomfortable theological stance: their designer not only equipped viruses with the machinery to exploit cellular signalling, but also ensured that cells remained vulnerable to such exploitation. The result is an ecosystem in which suffering, disease, and death are not unfortunate consequences of natural processes but deliberate design choices.
This is, of course, why mainstream biology requires no such designer. Co-evolution naturally explains why cells have receptors essential for communication and nutrient uptake, while viruses have, over immense timescales, adapted to hijack those same pathways. No malevolent architect is required—only the simple, iterative logic of variation, selection, and replication.
Yet the ID movement persistently overlooks this simpler, evidence-based account, preferring instead an argument that—if taken seriously—presents their putative creator as either unable to prevent viral parasitism or fully complicit in engineering it. Neither option supports the benevolent, omnipotent designer they hope to defend.