
Thursday, 20 November 2025

Creationism Refuted - Unlike Creationists, Chimpanzees Change Their Minds When the Evidence Changes


Ngamba Island Chimpanzees, Uganda

Photo: Sabana Gonzalez, UC Berkeley
New psychology study suggests chimpanzees might be rational thinkers | Letters & Science

A recent study has shown that chimpanzees, unlike creationists, are capable of rationally revising their beliefs when presented with new information – another trait they share with most humans.

Creationists, by contrast, tend to take pride in refusing to change their minds. For them, admitting error would be a sign of weakness: a capitulation to the supposedly corrupting influence of scientific evidence that threatens to lure them away from the ‘truth’. Their reasoning runs: it must be true because they believe it, and they believe it because it is true - a circularity designed to make intellectual bankruptcy look like a virtue called 'faith'.

Chimpanzees, unburdened by irrational superstition or egos in need of constant reinforcement, appear far more interested in being right than in demonstrating unwavering devotion to a demonstrably wrong belief system.

Interestingly, the chimpanzees can do something human children manage by about the age of four: assess evidence and base opinions on it. That ability is, of course, the basis of science - which may be why creationists struggle to understand it and reject evidence as the basis of opinion, believing themselves capable of simply knowing the truth, like a child under four. So we have a continuum of increasing intellectual ability and integrity, from toddlers and creationists, through chimpanzees and four-year-old humans, to human adults.

The study, carried out by a large research team that included UC Berkeley Psychology Postdoctoral Researcher Emily Sanford, UC Berkeley Psychology Professor Jan Engelmann, and Utrecht University Psychology Professor Hanna Schleihauf, has just been published in Science and is summarised in a University of California, Berkeley news item.
New psychology study suggests chimpanzees might be rational thinkers
Chimpanzees may have more in common with human thinkers than previously thought. A new study published in Science provides evidence that chimpanzees can rationally revise their beliefs when presented with new information.
The study, titled “Chimpanzees rationally revise their beliefs,” was conducted by a large research team that included UC Berkeley Psychology Postdoctoral Researcher Emily Sanford, UC Berkeley Psychology Professor Jan Engelmann and Utrecht University Psychology Professor Hanna Schleihauf. Their findings showed that chimpanzees — like humans — can change their minds based on the strength of available evidence, a key feature of rational thought.

Working at the Ngamba Island Chimpanzee Sanctuary in Uganda, the researchers presented chimps with two boxes, one containing food. Initially, the animals received a clue suggesting which box held the reward. Later, they were given stronger evidence pointing to the other box. The chimps frequently switched their choices in response to the new clues.

Chimpanzees were able to revise their beliefs when better evidence became available. This kind of flexible reasoning is something we often associate with 4-year-old children. It was exciting to show that chimps can do this too.

Dr. Emily M. Sanford, co-lead author
Department of Psychology
University of California, Berkeley

To ensure the findings reflected genuine reasoning rather than instinct, the team incorporated tightly controlled experiments and computational modeling. These analyses ruled out simpler explanations, such as the chimps favoring the latest signal (recency bias) or reacting to the most obvious cue. The models confirmed that the chimps’ decision-making aligned with rational strategies of belief revision.

We recorded their first choice, then their second, and compared whether they revised their beliefs. We also used computational models to test how their choices matched up with various reasoning strategies.

Dr. Emily M. Sanford.
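The kind of evidence-weighing the modelling tested can be illustrated with a toy Bayesian update. This is my own sketch, not the authors' actual model, and the likelihood ratios are invented for illustration:

```python
def update_belief(prior_a, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds of box A equal
    prior odds multiplied by the likelihood ratio of the evidence."""
    odds = prior_a / (1 - prior_a) * likelihood_ratio
    return odds / (1 + odds)

belief_a = 0.5                           # undecided between boxes A and B
belief_a = update_belief(belief_a, 3.0)  # weak clue favouring A (3:1)
print(round(belief_a, 2))                # 0.75 -> pick box A
belief_a = update_belief(belief_a, 1/9)  # stronger clue favouring B (9:1 against A)
print(round(belief_a, 2))                # 0.25 -> rationally switch to B
```

A rational agent tracks the balance of evidence, so the stronger second clue overturns the first; an agent that simply sticks with its first choice, or always follows the latest signal regardless of its strength, produces a different, distinguishable pattern of choices.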

The study challenges the traditional view that rationality — the ability to form and revise beliefs based on evidence — is exclusive to humans.

The difference between humans and chimpanzees isn’t a categorical leap. It’s more like a continuum.

Dr. Emily M. Sanford.

Sanford also sees broader applications for this research. Understanding how primates revise beliefs could reshape how scientists think about learning, child development and even artificial intelligence.

This research can help us think differently about how we approach early education or how we model reasoning in AI systems. We shouldn’t assume children are blank slates when they walk into a classroom.

Dr. Emily M. Sanford.

The next phase of her study brings the same tasks to children. Sanford’s team is currently collecting data from two- to four-year-olds to compare how toddlers and chimps revise beliefs.

It’s fascinating to design a task for chimps, and then try to adapt it for a toddler.

Dr. Emily M. Sanford.

Eventually, she hopes to extend the study to other primate species as well, building a comparative map of reasoning abilities across evolutionary branches. While Sanford has worked on everything from dog empathy to numerical cognition in children, one lesson remains constant: animals are capable of much more than we assume.

They may not know what science is, but they’re navigating complex environments with intelligent and adaptive strategies, and that’s something worth paying attention to.

Dr. Emily M. Sanford.

Other members of the research team include: Bill Thompson (UC Berkeley Psychology); Snow Zhang (UC Berkeley Philosophy); Joshua Rukundo (Ngamba Island Chimpanzee Sanctuary/Chimpanzee Trust, Uganda); Josep Call (School of Psychology and Neuroscience, University of St Andrews); and Esther Herrmann (School of Psychology, University of Portsmouth).

Publication:
Abstract

The selective revision of beliefs in light of new evidence has been considered one of the hallmarks of human-level rationality. However, tests of this ability in other species are lacking. We examined whether and how chimpanzees (Pan troglodytes) update their initial belief about the location of a reward in response to conflicting evidence. Chimpanzees responded to counterevidence in ways predicted by a formal model of rational belief revision: They remained committed to their initial belief when the evidence supporting the alternative belief was weaker, but they revised their initial belief when the supporting evidence was stronger. Results suggest that this pattern of belief revision was guided by the explicit representation and weighing of evidence. Taken together, these findings indicate that chimpanzees metacognitively evaluate conflicting pieces of evidence within a reflective process.


What this study ultimately highlights is a principle at the very heart of science: the willingness to change one’s mind when the evidence changes. Chimpanzees demonstrated this with ease, adjusting their expectations when presented with new information. Human children over the age of four typically do the same as their cognitive abilities develop.

Creationists, however, find themselves in a very different category. Their worldview demands that beliefs remain fixed regardless of contradictory evidence. Where science progresses through revision, correction, and refinement, creationism survives only by shutting the door on anything that might disturb its predetermined conclusions. In this sense, creationists fall behind not only the scientific community but also behind chimpanzees — and even toddlers — in basic rational responsiveness.

The contrast could not be clearer. When faced with new facts, chimpanzees update their beliefs; creationists update their excuses. The study’s findings serve as a reminder that the strength of science lies not in inflexible certainty but in the courage to revise, rethink, and improve our understanding of the world.

If humans evolved intelligence, why are there still creationists?

Wednesday, 19 November 2025

Creationism Refuted - 40,000-Year-Old Woolly Mammoth RNA


One of Yuka’s legs, illustrating the exceptional preservation of the lower part of the leg after the skin had been removed, which enabled recovery of ancient RNA molecules.

Photo: Valeri Plotnikov.
The world’s oldest RNA extracted from woolly mammoth - Stockholm University

A team led by researchers from Stockholm University, Sweden, has just announced the successful extraction of RNA from 40,000-year-old mammoth remains — the oldest RNA ever obtained. This shows that not only DNA but also RNA can persist for extraordinary lengths of time under the right conditions, adding yet more to the mountain of evidence that undermines creationist claims. With preserved RNA, researchers can even reconstruct the DNA that originally served as its template, effectively giving scientists two independent avenues for recovering genetic information.

One of the joys of debunking creationism — a childish superstition when set beside the rigour of evolutionary biology — is the sheer abundance of evidence. Almost every peer-reviewed paper in biology, geology, palaeontology, cosmology, and the other natural sciences demonstrates, in one way or another, the reality of evolution and the age of the Earth, and presents verifiable results that creationism simply cannot accommodate.

Even psychology lends its weight. Not only does it support an evolutionary account of human cognition and intelligence, but it also helps explain why creationists cling so tightly to demonstrably false beliefs. For many, rejecting evidence becomes a test of loyalty or personal strength, with scientific data treated as part of a supposed conspiracy designed to shake their faith. If they can cling to their faith despite the overwhelming contrary evidence, then they must really believe it.

Adding this new discovery to the existing evidence is rather like tossing a pebble onto Mount Everest and expecting creationists to accept the mountain’s existence because a pebble lies on it. Such acceptance is impossible for the committed creationist, since that would mean yielding to the ‘evil conspiracy’ and admitting that their favourite holy book is not a perfect, divinely authored scientific text, but a compilation of Bronze Age and Early Iron Age mythology, created by people doing their best to explain a world they did not yet understand.

Monday, 17 November 2025

Creationism Refuted - Doggy Dos For Creationists


Dogs 10,000 years ago roamed with bands of humans and came in all shapes and sizes

This is the second article in The Conversation that incidentally refutes creationism and shows why the Bible must be dismissed as a source book for science and history: when compared with reality, its stories are not just wrong; they're not even close.

This one deals with essentially the same subject as my last post - the evolution of all the different dog varieties since wolves were first domesticated some 11,000 years ago. Together with all the other canids that creationists insist are all dog 'kind', including several foxes, several subspecies of wolf, coyotes, jackals, and African wild dogs, the hundreds of recognised dog breeds could not conceivably have arisen from a single pair, with its resulting genetic bottleneck, just a few thousand years ago. Moreover, we are expected to believe that in that short space of time all the canids evolved from being vegetarian (with canine teeth, meat-cutting incisors and bone-crushing molars, apparently) to being obligate carnivores.

As well as the paper that was the subject of my last blog post, this article in The Conversation mentions another paper, also published in Science, by palaeontologists led by Shao-Jie Zhang of the Kunming Institute of Zoology, China. This paper draws on DNA evidence from ancient Eastern Eurasian dogs.

The article is by Kylie M. Cairns, a Research Fellow in Canid and Wildlife Genomics, UNSW Sydney, Australia, and Professor Melanie Fillios of the Department of Archaeology and Palaeoanthropology, University of New England, Australia. It is reprinted here under a Creative Commons licence, reformatted for stylistic consistency.

Friday, 14 November 2025

How Science Works - Not Abandoning Evolution - Refining Our Understanding Of It


A new theory of molecular evolution | University of Michigan News

A new paper in Nature Ecology & Evolution by a research team at the University of Michigan, led by evolutionary biologist Professor Jianzhi Zhang, comprehensively but incidentally refutes several common creationist claims — such as that mainstream biologists are abandoning evolution because it supposedly cannot explain the evidence, that all mutations are harmful and so cannot underpin evolution, and that scientists are prevented from publishing findings that challenge orthodoxy.

The study examines a key assumption of the Neutral Theory of Molecular Evolution — namely that most amino-acid substitutions are neutral (neither beneficial nor strongly deleterious) and fix by drift rather than selection. The authors report experimental data showing that in mutational-scanning assays of over 12,000 amino-acid-altering mutations across 24 genes, >1 % of mutations were beneficial, implying a far higher beneficial-mutation rate than is conventionally assumed.
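To see why a beneficial-mutation rate above 1% sits so awkwardly with the neutral theory, a back-of-envelope calculation helps. The sketch below uses Kimura's classic fixation probabilities (roughly \(2s\) for a beneficial mutation, \(1/(2N_e)\) for a neutral one); the values of \(s\) and \(N_e\) are illustrative assumptions of mine, not figures from the paper:

```python
def fraction_of_fixations_beneficial(frac_beneficial, s, n_e):
    """Share of eventual fixations that are beneficial, given the fraction
    of new mutations that are beneficial, their selection coefficient s,
    and the effective population size n_e. Deleterious mutations are
    ignored, since they almost never fix."""
    p_fix_beneficial = 2 * s          # Kimura's approximation for beneficial alleles
    p_fix_neutral = 1 / (2 * n_e)     # classic neutral fixation probability
    weight_b = frac_beneficial * p_fix_beneficial
    weight_n = (1 - frac_beneficial) * p_fix_neutral
    return weight_b / (weight_b + weight_n)

# 1% beneficial mutations, s = 0.01, Ne = 1,000,000 (assumed values)
print(f"{fraction_of_fixations_beneficial(0.01, 0.01, 1e6):.4f}")  # 0.9975
```

With those numbers, well over 99% of fixations would be driven by selection, wildly at odds with the roughly neutral substitution patterns seen in comparative genomic data; hence the need for a new model.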

To reconcile that finding with the fact that comparative genomic data appear consistent with many substitutions being neutral, Zhang’s team propose a new model — “adaptive tracking with antagonistic pleiotropy” — in which beneficial mutations are frequently environment-specific, and when the environment changes the same mutation may become deleterious, hence failing to fix. In this way, although beneficial mutations are common, they rarely reach fixation when environments shift, and substitution patterns can appear neutral.

The paper operates fully within the framework of evolutionary theory by natural selection: it does not challenge evolution itself, but refines a subsidiary theoretical model about molecular changes. Thus, it strengthens the broader evolutionary paradigm rather than undermining it.

Average time to fixation of a mutation under different environments. To keep things simple, assume a standard Wright–Fisher population with effective size \(N_e\), diploid, with a single new copy of the allele arising in one generation.
  1. Neutral mutation in a neutral environment
    A new neutral mutation starts at frequency
    \[p_0 = \frac{1}{2N_e}.\]
    Two classic results:
    • Probability that a neutral mutation eventually fixes: \[P_{\text{fix, neutral}} = p_0 = \frac{1}{2N_e}.\]
    • Expected time to fixation given that it does fix (diffusion approximation): \[\bar T_{\text{fix, neutral}} \approx 4N_e \text{ generations}.\]

    So, for a neutral mutation that happens to win the drift lottery, the typical time-scale to drift to fixation is of order \(4N_e\) generations.
  2. Mutation that is beneficial half the time and deleterious half the time
    Now suppose the same mutation experiences:
    • selection coefficient \(+s\) in environment A,
    • selection coefficient \(-s\) in environment B,
    • with the population spending 50% of generations in each environment.

    On average, the selection coefficient is zero:
    \[\bar s = \tfrac12(+s) + \tfrac12(-s) = 0,\]
    so to a first approximation the allele is time-averaged neutral. However, it is not truly neutral – it is sometimes favoured and sometimes disfavoured. That fluctuation in selection has two important consequences:
    • The probability of fixation is typically lower than for a strictly neutral mutation, because periods in the “bad” environment \((-s)\) tend to undo gains made in the “good” environment \((+s)\).
    • The distribution of times to absorption (loss or fixation) is broader, and the mean time to fixation, conditional on fixation, is generally longer and more variable than the simple \(4N_e\) rule.

    Crucially, there is no neat closed-form equivalent to \(\bar T_{\text{fix}} \approx 4N_e\) in this fluctuating case: the mean time to fixation depends on
    • the population size \(N_e\),
    • the magnitude of \(s\), and
    • how fast the environment flips between A and B.

    In practice, one usually estimates the fixation probability and mean fixation time in such a ±s, 50/50 scenario either by:
    • solving the diffusion (backward Kolmogorov) equations for allele frequency with a stochastic selection term, or
    • simulating a Wright–Fisher (or Moran) population in which the environment changes over time and recording how long successful mutations take to fix.
This is exactly the sort of situation considered in the Zhang et al. paper: a mutation that is advantageous in one environment but disadvantageous in another may arise fairly often, yet fail to become fixed because the population spends much of its time in the “wrong” environment.
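The second of those approaches is easy to sketch. The toy Wright–Fisher simulation below is my own illustration, not the paper's model; the population size, selection coefficient and switching period are arbitrary. It tracks a single new mutant copy while the selection coefficient flips between \(+s\) and \(-s\):

```python
import random

def fixation_prob(n_copies=50, s=0.0, period=10, trials=6000, seed=1):
    """Estimate the fixation probability of a single new mutant copy in a
    Wright-Fisher population of n_copies gene copies (i.e. 2*Ne).
    The selection coefficient alternates between +s and -s every `period`
    generations; s=0 recovers the strictly neutral case."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        count = 1
        gen = rng.randrange(2) * period   # random starting environment
        while 0 < count < n_copies:
            sign = s if (gen // period) % 2 == 0 else -s
            p = count / n_copies
            w = p * (1 + sign) / (p * (1 + sign) + (1 - p))      # selection
            count = sum(rng.random() < w for _ in range(n_copies))  # drift
            gen += 1
        fixed += (count == n_copies)
    return fixed / trials

print(fixation_prob(s=0.0))  # neutral case: close to 1/n_copies = 0.02
print(fixation_prob(s=0.2))  # fluctuating +/-0.2: depends on period and Ne
```

Recording the generation count for the runs that do fix, and averaging, gives the conditional time to fixation, which can then be compared with the neutral expectations of \(1/(2N_e)\) and \(4N_e\) generations.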
A summary of the research is available in a University of Michigan News article.
A new theory of molecular evolution
For a long time, evolutionary biologists have thought that the genetic mutations that drive the evolution of genes and proteins are largely neutral: they’re neither good nor bad, but just ordinary enough to slip through the notice of selection.

Now, a University of Michigan study has flipped that theory on its head.

In the process of evolution, mutations occur which can then become fixed, meaning that every individual in the population carries that mutation. A longstanding theory, called the Neutral Theory of Molecular Evolution, posits that most genetic mutations that are fixed are neutral. Bad mutations will be quickly discarded by selection, according to the theory, which also assumes that good mutations are so rare that most fixations will be neutral, says evolutionary biologist Jianzhi Zhang.

The U-M study, led by Zhang, aimed to examine whether this was true. The researchers found that so many good mutations occurred that the Neutral Theory cannot hold. At the same time, they found that the rate of fixations is too low for the large number of beneficial mutations that Zhang’s team observed.

To resolve this, the researchers suggest that mutations that are beneficial in one environment may become harmful in another environment. These beneficial mutations may not become fixed because of frequent environmental changes. The study, supported by the U.S. National Institutes of Health, was published in Nature Ecology and Evolution.

We’re saying that the outcome was neutral, but the process was not neutral. Our model suggests that natural populations are not truly adapted to their environments because environments change very quickly, and populations are always chasing the environment.

Professor Jianzhi Zhang, corresponding author
Department of Ecology and Evolutionary Biology
University of Michigan
Ann Arbor, MI, USA.

Zhang says their new theory, called Adaptive Tracking with Antagonistic Pleiotropy, tells us something about how well all living things are adapted to their environments.

I think this has broad implications. For example, humans. Our environment has changed so much, and our genes may not be the best for today’s environment because we went through a lot of other different environments. Some mutations may be beneficial in our old environments, but are mismatched to today.

At any time when you observe a natural population, depending on when the last time the environment had a big change, the population may be very poorly adapted or it may be relatively well adapted. But we’re probably never going to see any population that is fully adapted to its environment, because a full adaptation would take longer than almost any natural environment can remain constant.

Professor Jianzhi Zhang.

The Neutral Theory of Molecular Evolution was first proposed in the 1960s. Previously, scientists studied evolution based on the morphology and physiology, or appearance, of organisms. But starting in the 1960s, scientists were able to start sequencing proteins, and later, genes. This prompted researchers to look at evolution at the molecular level.

To measure beneficial mutation rates, Zhang and colleagues investigated large deep mutational scanning datasets produced by his and other labs. In this kind of scanning, the scientists created many mutations on a specific gene or region of the genome in model organisms such as yeast and E. coli.

The researchers then followed the organism over many generations, comparing them against the wild type, or the most common version existing in nature, of the organisms. This allowed the researchers to measure their growth and compare their growth rate to the wild type, which is how they estimated the effect of the mutation.

They found that more than 1% of mutations are beneficial, orders of magnitude greater than what the Neutral Theory allows. This amount of beneficial mutations would lead to more than 99% of fixations being beneficial and a rate of gene evolution that is much higher than the rate that is observed in nature. The researchers realized they had made a mistake in assuming an organism’s environment remained constant.

To investigate the impacts of a changing environment, Zhang’s research team compared two groups of yeast. One group evolved in a constant environment for 800 generations (each generation lasted 3 hours), while the second group evolved in a changing environment, in this case composed of 10 different kinds of media, or solution, that the yeast grew in. The second yeast group grew in the first media for 80 generations, in the second media for another 80 generations, and so on, for a total of 800 generations as well.

The researchers found that there were far fewer beneficial mutations in the second group compared to the first. Although the beneficial mutations occurred, they didn’t have a chance to become fixed before the environment shifted.

This is where the inconsistency comes from. While we observe a lot of beneficial mutations in a given environment, those beneficial mutations do not have a chance to be fixed because as their frequency increases to a certain level, the environment changes. Those beneficial mutations in the old environment might become deleterious in the new environment.

Professor Jianzhi Zhang.

However, Zhang says there is a caveat: The data they used came from yeast and E. coli, two unicellular organisms in which it’s relatively easy to measure the fitness effects of mutations. Deep mutational scanning data collected from multicellular organisms would tell whether their findings from unicellular organisms apply to multicellular organisms such as humans. Next, the researchers are planning a study to understand why it takes so long for organisms to fully adapt to a constant environment.

Other authors of the study include former U-M graduate students Siliang Song and Xukang Shen and former U-M postdoctoral researcher Piaopiao Chen.

Publication:
Abstract
The neutral theory of molecular evolution, positing that most amino acid substitutions in protein evolution are neutral, is supported by vast comparative genomic data. However, here we report that the key premise of the theory—beneficial mutations are extremely scarce—is violated. Deep mutational scanning data from 12,267 amino acid-altering mutations in 24 prokaryotic and eukaryotic genes reveal that > 1% of these mutations are beneficial, predicting that > 99% of amino acid substitutions would be adaptive. This observation demands a new theory that is compatible with both the high beneficial mutation rate and the comparative genomic data considered consistent with the neutral theory. We propose such a theory named adaptive tracking with antagonistic pleiotropy. In this theory, virtually all beneficial mutations observed are environment specific. Frequent environmental changes and mutational antagonistic pleiotropy across environments render most of the beneficial mutations seen at one time deleterious soon after and hence rarely fixed. Consequently, despite the occurrence of adaptive tracking—continuous adaptation to a changing environment fuelled by beneficial mutations—neutral substitutions prevail. We show that this theory is supported by population genetics simulation, empirical observations and experimental evolution and has implications for the adaptedness of natural populations and the tempo and mode of evolution.

Apart from the obvious point that the researchers show absolutely no sign of finding the Theory of Evolution unfit for purpose, let alone turning to creationism (as creationist leaders have been assuring their followers is just about to happen, for more than half a century), there are other, less obvious aspects of this paper that should give creationists pause.

First, the study highlights the close relationship between the environment and whether a mutation is beneficial, deleterious or truly neutral. These terms describe how well an organism survives and reproduces under particular conditions: change the environment, and exactly the same mutation can have an entirely different effect. This is precisely what Darwin proposed.

As with all good science, the need for slight adjustment doesn't invalidate the entire field; it strengthens it. The Theory of Evolution is reinforced rather than weakened by refinements like this, which improve our understanding of the details. Minor adjustments to subsidiary theories help clarify how evolution works; they do not threaten the foundations of the overarching theory on which modern biology depends, and they do nothing to justify the creationist tactic of falsely presenting their superstition as the only alternative to an allegedly 'failed' scientific theory, without providing the slightest scrap of testable evidence.

As Thomas Henry Huxley is reputed to have exclaimed on reading Darwin’s Origin of Species for the first time: “How stupid not to have thought of it oneself!” Nearly 170 years later, how much more foolish it is to cling to a fairy tale that explains nothing, makes no testable predictions and is unfalsifiable because it relies on magic, essentially because it happens to be the belief you were raised with, when the Theory of Evolution provides a coherent, predictive and experimentally supported account of the living world.

Refuting Creationism - Cambrian Fossils Confirm The Bible Is Wrong.



Salterella in longitudinal section, showing biomineralized outer shell (blue arrow), agglutinated material (red arrow) and the boundary between the agglutinated layer and the shell near the apex (white arrows).

Interbedded fine-grained clastic and carbonate strata of the lower Illtyd Formation, Wind River, Yukon, Canada, that locally contain Salterella.
A skeleton and a shell? Ancient fossil finally finds home on the tree of life | Virginia Tech News | Virginia Tech

As though fossils from half a billion years before their mythical “Creation Week” weren’t awkward enough for creationists, this latest find slips neatly into the tree of life and closes a small but meaningful gap in our understanding of how protective shells evolved. In doing so, it undermines more creationist claims than they might care to consider.

A research team led by Prescott J. Vayda of Virginia Tech has shown that the enigmatic fossils Volborthella and Salterella, which have long puzzled palaeontologists, are in fact early cnidarians — members of the group that includes corals, jellyfish, and sea anemones. These organisms are united by their stinging cells, which they use to subdue prey. Even more troublesome for creationists, the structure of the earlier Volborthella shell strongly suggests a transitional relationship with the more complex shell of Salterella, hinting at an evolutionary sequence between the two.

The team’s findings have just been published in the Journal of Paleontology.

The Cambrian period was defined by the emergence of mobility and, with it, true predation. These new ecological dynamics sparked evolutionary “arms races”, driving rapid diversification in both offensive and defensive strategies: sensory structures, spines, shells, and behaviours such as burrowing. These early cnidarians provide an important glimpse into how some of the earliest protective shells came to be.

Such evolutionary arms races also offer yet another reason to dismiss the notion of an intelligent designer. No competent designer would turn yesterday’s solution into today’s problem — yet that is precisely what we see in nature, where improvements in predators prompt improvements in prey, and vice versa. It’s exactly what one would expect from an unguided evolutionary process with no foresight, driven solely by differential survival and reproduction.

Friday, 24 October 2025

How Science Works - Biologists Might Need To Rethink A Detail Of Evolutionary Biology

Details of the surface of two sheet-like colonies of the ‘Berenicea’ type: (A) In Hyporosopora dilatata, the colony surface is relatively flat, save for the slightly convex zooids and faint growth lines (Upper Callovian or Lower Oxfordian, Oxford Clay; Stanton Harcourt, Oxfordshire); and (B) Well-defined transverse ridges cross the colony surface in Rugosopora enstonensis (Bathonian, Hampen Marly Beds; Enstone, Oxfordshire). Scale bars are 500µm.

New Study Reveals Berenicea Zooid Size Reduction Over 200 Million Years Contradicts Cope's Rule----Chinese Academy of Sciences

The discovery that a group of organisms has, contrary to “Cope’s Rule,” undergone a steady reduction in body size over the past 200 million years is a useful reminder of how science works — and why religion so often falters.

A cornerstone of the scientific method is its willingness to acknowledge error. Real intellectual strength lies not in clinging to discredited beliefs as though doing so were a test of character, but in facing up to mistakes, learning from them, and changing one’s mind. That is how knowledge advances.

Religion, by contrast, remains shackled to the dogmas of its ancient founders. To alter those fundamental beliefs is, in effect, to abandon the religion itself. This is why, while science has sent probes into deep space and placed human beings on the Moon, faith — despite lofty claims of being able to “move mountains” — has yet to lift so much as a feather a millimetre off the ground.

The new finding was just reported in the journal Palaeontology by Associate Professor MA Junye of the Nanjing Institute of Geology and Paleontology at the Chinese Academy of Sciences (NIGPAS) and collaborators. They found that Berenicea, a genus of cyclostome bryozoans, has experienced a continuous reduction in zooid size over the past 200 million years. This runs counter to “Cope’s Rule,” which describes a tendency for body size to increase during the evolution of many lineages.

Cope’s Rule was formulated by the American palaeontologist Edward Drinker Cope (1840–1897). There are, of course, well-known exceptions — such as the “island effect,” where animals isolated on small islands often evolve into miniature versions of their mainland relatives — but these are localised adaptations to particular environments. Cope’s Rule, by contrast, applies to long-term, broad-scale evolutionary trends.

Sunday, 5 October 2025

Creationism in Crisis - A Transitional Lizard-Snake - From 167 Million Years Before 'Creation Week'


a, Life reconstruction of Breugnathair elgolensis based on measured proportions of NMS G.2023.7.1. b, Digital render of the bones as originally preserved in NMS G.2023.7.1, using information from the pilot scan (Supplementary Data 1 and 2). c–f, Digital renders of cervical vertebra (CEb in Extended Data Fig. 5) in left lateral (c), ventral (d), anterior (e) and posterior (f) views. g–i, Caudal vertebra (CAa in Extended Data Fig. 5) in left lateral (g), ventral (h) and anterior (i) views. Scale bars: 50 mm (b), 2 mm (c–i). Life reconstruction reproduced with permission from Mick Ellison (American Museum of Natural History).
New Species of Ancient Hook-toothed Reptile Discovered | AMNH

A newly described Jurassic fossil from the Isle of Skye, Scotland, has revealed a remarkable “missing link” between lizards and snakes. The find, named Breugnathair elgolensis, provides important evidence of snake evolution and further undermines creationist claims that no transitional forms exist. The research has just been published in Nature and reported by the American Museum of Natural History.

For creationists, this week must feel much like any other, as science continues to produce paper after paper that refutes their beliefs, while not a single one provides a shred of evidence in support of creationism — whether young-Earth or old-Earth, whether invoking an interventionist deity who micro-manages every detail of the universe, or a distant creator who merely lit the blue touch-paper and now sits back to watch the results.

Science, of course, concerns itself only with material reality. It has no use for evidence-free superstitions or fairy tales of the supernatural — notions born of human imagination and the desire for narrative to fill the gaps in our knowledge and understanding. Creationists, therefore, must rely on self-delusion and the irrational belief in a false dichotomy of “facts versus faith”, where even the slightest perceived flaw in science supposedly means total failure and victory for faith by default.

Sadly for creationists, that long-dreamed-of day when science collapses and their god descends triumphantly from the skies in a chariot — looking for all the world like a Bronze Age tribal despot — seems increasingly remote. Science continues to validate the scientific method and to build knowledge upon verifiable evidence, always willing to revise and refine its understanding in light of new discoveries. One such discovery is that of a transitional Jurassic reptile showing a mosaic of lizard and snake features — exactly what we would expect if snakes and lizards share a common ancestor. The problem with pinning one’s hopes on a false dichotomy that depends on science failing is that every new discovery only strengthens science and renders the alternative ever more irrelevant and untenable.

The troublesome fossil for creationists was discovered about ten years ago on the Isle of Skye, in the Inner Hebrides off Scotland’s west coast, by Roger Benson, Macaulay Curator of the American Museum of Natural History, and his colleagues. Named Breugnathair elgolensis — a Latinised form of the Scots Gaelic for “false snake of Elgol” — it has now been described in an open-access paper in Nature.

Thursday, 2 October 2025

Unintelligent Design - How The Process of Germ Cell Production Goes Wrong And Creates Genetic Defects

Paired chromosomes showing crossovers in a mouse oocyte.
Hunter lab

Landmark Discovery Reveals How Chromosomes Are Passed From One Generation to the Next | UC Davis

This article continues my series exploring the many ways in which the human body demonstrates unintelligent design. Far from being the perfect handiwork of a benevolent creator, our anatomy and physiology are full of flaws, inefficiencies, and dangerous vulnerabilities. Each of these makes sense in light of evolution by natural selection—an opportunistic, short-term process that tinkers with existing structures—but they make no sense at all if we are supposed to be the product of an all-wise designer.

Creationists often argue from a position of ignorant incredulity, claiming that complexity implies intelligent design, when in fact the opposite is true. The hallmark of good, intelligent design is simplicity, for two very simple reasons: first, simple things are easier to construct and require fewer resources; and second, simple structures and processes have fewer potential points of failure, making them more reliable.

In short: complexity is evidence against intelligent design and in favour of a mindless, utilitarian, natural process such as evolution.

In addition to being minimally complex, another characteristic we would expect of something designed by an omniscient, maximally intelligent, and benevolent designer is that the process should work perfectly, every time, without fail.

The problem for creationists is that their favourite example of supposed intelligent design — the human body — is riddled with complexity in both its structures and processes. This complexity provides countless examples of systems that fail to perform adequately, or fail altogether, with varying frequency. Many failures occur in the layers of complexity needed to control or compensate for the inadequacies of other systems, and when those compensatory mechanisms themselves fail, the result can be a cascade of dysfunctions or processes running out of control. The consequences manifest as diseases, defects, and disabilities — hardly the work of an all-wise designer.

They are, however, exactly what we would expect from a mindless, utilitarian process like evolution, which prioritises short-term survival and reproduction, selecting only what is better — sometimes only marginally better — than what preceded it, rather than seeking optimal solutions. I have catalogued many such suboptimal compromises in the anatomy and physiology of the human body, and the problems that arise from them, in my book, The Body of Evidence: How the Human Body Refutes Intelligent Design, one of my Unintelligent Design series.

Just yesterday, I wrote about research suggesting that autism may be a by-product of the rapid evolution of intelligence in humans. Now we have another striking example of extreme biological complexity which, when it goes wrong, can have catastrophic consequences: the production of eggs in women and sperm cells in men.

Sunday, 28 September 2025

Malevolent Designer News - How Candida Albicans (Thrush) Is Cleverly Designed to Infect Your Mouth - Evolution Or Malevolent Design?

The yeast fungus Candida albicans (blue) breaks out of human immune cells (red) by forming long thread-like cells called hyphae. The part of the hypha that has already left the immune cells is coloured yellow.
© Erik Böhm, Leibniz-HKI

The dose makes the difference - Leibniz-HKI

As has often been pointed out in these blog posts, the "evidence" offered by Discovery Institute fellows William A. Dembski and Michael J. Behe for an intelligent designer can, by the same logic and using the same evidence, be interpreted as pointing to a theologically awkward malevolent designer. This is a line of reasoning routinely ignored by the "cdesign proponentsists", who prefer to overlook the many examples of parasites and pathogens—and the evolutionary traits that make them so successful at invading and surviving within their hosts.

A fresh example that creationists will either have to ignore or blame on "The Fall" comes from researchers at the Leibniz Institute for Natural Product Research and Infection Biology. They have shown that the fungus Candida albicans, which causes thrush, has evolved a highly sophisticated and "finely tuned" mechanism for infecting the human mouth while evading the immune system.

The stock creationist response is to shift responsibility onto the biblical myth of "The Fall," retreating into Bible literalism. Yet this is precisely the kind of literalism the Discovery Institute has been at pains to insist is not essential to the notion of intelligent design, which it markets as a scientific alternative to evolutionary theory—or "Darwinism," as they prefer to call it. This rhetorical sleight of hand was central to the Institute’s "Wedge Strategy," devised after the 1987 US Supreme Court ruling in Edwards v. Aguillard, which confirmed that teaching creationism in public schools violated the Establishment Clause of the First Amendment.

The new research reveals that C. albicans produces a toxin called candidalysin in carefully regulated doses that allow it to infiltrate the mucous lining of the mouth. Too little candidalysin, and the fungus would fail to establish itself; too much, and it would trigger an immune response strong enough to destroy it. Normally, C. albicans exists in a round, yeast-like form, but under the "right" conditions it can switch into the filamentous hyphal form typical of fungi. This transformation allows it to penetrate host tissues and, in immune-compromised patients, become life-threatening. It is in this invasive hyphal state that C. albicans produces candidalysin.

The production of hyphae, and therefore candidalysin, is controlled by the gene EED1. By any definition, EED1 would qualify as an example of "complex specified information" according to Dembski’s own formulation — evidence, according to the Discovery Institute, of supernatural intelligent design.

Monday, 1 September 2025

Malevolent Design - A Paradox Creationists Pretend Not to See

The ancient city of Jerash, Jordan, epicentre of the Justinian Plague

Progress of the Black Death in Europe

USF, FAU researchers solve 1,500-year-old mystery: The bacterium behind the first pandemic

The notion of intelligent design — the current flagship of creationism’s attempt to replace scientific realism with magical superstitions and Bible literalism dressed up as “alternative science” — contains a blatant paradox its advocates must ignore: the very same “logic” used to argue that the God of the Bible created living organisms can just as easily be used to argue that any such designer is a malevolent sadist who deliberately increases suffering in the world while ignoring countless ways to reduce it.

The theological problems this raises are never discussed in polite creationist circles, except for the lazy fallback of blaming everything on “The Fall.” But this move exposes intelligent design for what it really is — Bible-literalist religion in disguise. And that sits awkwardly against over half a century of insistence by the Discovery Institute that ID is not a religious idea, but rather a scientific one that should be taught in American public schools at taxpayer expense — a direct violation of the Establishment Clause and the U.S. Supreme Court’s ruling in Edwards v. Aguillard (1987).

The paradox lies in the fact that the very same so-called evidence — Michael J. Behe’s “irreducible complexity” and William A. Dembski’s “complex specified genetic information” — can be found in the genomes, structures, and processes of parasites and pathogens, making them devastatingly effective at exploiting and destroying their hosts. In fact, Behe himself has, probably without realising it, used precisely such examples. The bacterial flagellum he highlights enables E. coli to move efficiently through our gut, causing sometimes fatal food poisoning. And his example of resistance to anti-malarial drugs in Plasmodium parasites illustrates how evolution equips them to continue killing hundreds of thousands of children every year while condemning millions more to cycles of malarial fever.

Now, new research has highlighted another gruesome example. The bacterium Yersinia pestis — responsible for multiple waves of plague throughout the Middle Ages — has been shown to have evolved into its highly lethal form only in relatively recent human history. Beginning with the “Plague of Justinian” about 1,500 years ago, Y. pestis unleashed pandemics that killed between 30% and 50% of Europe’s population.

An interdisciplinary team at the University of South Florida (USF) and Florida Atlantic University (FAU), with collaborators in India and Australia, has now confirmed genomically that the Justinian plague was indeed caused by Y. pestis, as long assumed. Analysing DNA from plague victims buried in a mass grave at the ancient city of Jerash, Jordan — the epicentre of that pandemic — one group identified the culprit, while another team traced the bacterium’s evolutionary changes that made it one of history’s most notorious killers.

Tuesday, 12 August 2025

Malevolent Design - How 'Intelligent Design' Exposes Divine Malevolence

Schistosoma mansoni

Parasitic Worms Evolved to Suppress Neurons in Skin - AAI News

It gets tedious repeating this point so often, but so long as creationists keep using what they claim is irreducible complexity and/or complex specified genetic information as evidence for intelligent design, they need to be reminded that the same argument can also be used as evidence of their putative designer’s malevolence.

Creationists, of course, ignore the fact that parasites are no less “designed” than humans and have structures and processes that are “irreducibly complex” and depend on “complex specified information” in order to succeed in their environments. Yet their existence, and how they interact with and even manipulate their hosts, inevitably increases suffering in the world – a theological problem that creationist disinformation organisations such as the Discovery Institute avoid like the plague.

Parasite–host relationships also inevitably involve evolutionary arms races – the antithesis of intelligence if both “sides” are supposedly designed by the same designer.

So, to keep reminding them: if their justification for designating their god as the designer of living systems holds true, then it is also justification for designating the same god as the cause of suffering. Here is another example of a parasite that falls within their definition of an organism “designed” to do what it does and to participate in an arms race with its host in order to do so. This concerns the discovery that the parasitic worm Schistosoma mansoni, which causes schistosomiasis, is able to suppress neurons in the skin to evade detection as it burrows into its victim’s body (usually the leg).

Monday, 11 August 2025

Refuting Creationism - Just How Wrong Could The Bible's Authors Be?

The Cosmic Horseshoe gravitational lens.
Credit: NASA/ESA (CC BY 4.0)


'Most massive black hole ever discovered' is detected | The Royal Astronomical Society

The authors of Genesis got so much so badly wrong that it’s difficult to find anything they got right — but the hardest place to find even a sliver of accuracy is their description of the universe. With their naïve attempt to explain the existence of different kinds of animals, they at least recognised that there were different species. Their notion of magical creation out of nothing, without ancestry, was of course laughably wrong, but at least they knew there were distinct organisms requiring explanation.

By contrast, in their picture of the cosmos — centred on a small, flat world with a solid dome (the “firmament”) over it—about the only things they got right were the existence of Earth, the Sun and Moon, and “the stars”. Everything else was subsumed into that one word: “stars”, a bucket that included the visible planets, distant suns, and entire galaxies, all imagined as lights fixed to the dome, with the Sun and Moon set within it.

In short, almost everything in that description is wrong—not just what things are, but where they are. They spoke about light, but knew nothing of its nature. Noticing that light comes from luminous bodies is probably the only thing they got right.

Black Holes: Nature’s Most Extreme Objects

A black hole is a region of spacetime where gravity is so intense that nothing—not even light—can escape. Black holes form when a massive star collapses under its own gravity or through the merger of smaller black holes.

Event Horizon

The event horizon is the “point of no return” surrounding a black hole. Once anything crosses it, escape is impossible. From outside, the event horizon appears as a dark sphere; it’s not a physical surface but a boundary defined by relativity.
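To give a sense of scale, here is my own back-of-the-envelope sketch (not part of any study quoted here) using the standard Schwarzschild formula, r_s = 2GM/c², for the event-horizon radius of a non-rotating black hole:

```python
# Schwarzschild radius r_s = 2GM/c^2 for a non-rotating black hole.
# Constants are rounded CODATA/IAU values.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit (Earth-Sun distance), m

def schwarzschild_radius(mass_kg):
    """Event-horizon radius in metres for a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_SUN) / 1e3)        # ~3 km if the Sun collapsed
print(schwarzschild_radius(36e9 * M_SUN) / AU)  # ~700 AU for 36 billion suns
```

So the horizon of an ultramassive black hole of the size discussed later in this post would span several hundred times the Earth-Sun distance.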

Singularity

At the very centre, according to general relativity, lies a singularity — a point where density and spacetime curvature become infinite, and the known laws of physics break down. In reality, quantum effects are expected to smooth out this infinity, but a complete theory of quantum gravity is needed to describe it properly.

Relativity vs Quantum Physics

Black holes are unique because they combine two regimes of physics:
  • Einstein’s general relativity describes how they warp spacetime.
  • Quantum mechanics governs the behaviour of particles and energy at extremely small scales.

The crossover between these domains lies deep inside the black hole, in a region near the singularity sometimes called the quantum gravity zone, where spacetime curvature reaches the Planck scale and neither theory works alone. This is not the event horizon, as is sometimes said; the event horizon is still very much part of the Relativity domain.

The Firewall Hypothesis

Stephen Hawking and others noted a paradox: quantum theory predicts that information cannot be destroyed, yet anything crossing an event horizon seems lost forever. One proposed resolution is the firewall hypothesis: instead of passing smoothly through, anything hitting the horizon would be incinerated by a burst of high-energy radiation. This “firewall” would break relativity’s expectation that crossing the horizon is uneventful (for a large black hole) but would preserve quantum theory’s rules.

Open Questions
  • Does the singularity really exist, or is it replaced by something else in a quantum theory of gravity?
  • Do firewalls exist, or is there a different resolution to the black hole information paradox?
  • Can Hawking radiation—tiny energy leaks predicted by quantum field theory—eventually cause black holes to evaporate completely?

Black holes remain one of physics’ most powerful testing grounds, where the deepest laws of nature are pushed to their limits.
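On that last open question, the standard semiclassical estimate (a sketch of my own, assuming an isolated black hole emitting only photons and absorbing nothing) gives an evaporation time proportional to the cube of the mass:

```python
import math

# Hawking evaporation time, t ~ 5120*pi*G^2*M^3 / (hbar*c^4).
# This is the textbook photon-only estimate, not a measured quantity.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
HBAR = 1.0546e-34   # reduced Planck constant, J s
M_SUN = 1.989e30    # kg
YEAR = 3.156e7      # seconds in a year

def evaporation_time_years(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * c**4) / YEAR

print(evaporation_time_years(M_SUN))         # ~2e67 years for one solar mass
print(evaporation_time_years(36e9 * M_SUN))  # ~1e99 years for 36 billion
```

Either way, the answer dwarfs the roughly 13.8-billion-year age of the universe, which is why evaporation has never been observed.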

And of course, they could have known nothing about black holes, or about the relationship between mass and gravity that explains them and governs the motions of the “stars”.

A point I’ve made here before — worth making again — is that we can be certain the Bible was not written by a creator god by seeing how much of it is flatly wrong. Much of it can’t even be rescued as meaningful metaphor or allegory—the standard apologetic for obvious falsehoods. It is simply, unarguably, and unambiguously wrong on multiple levels.

If a creator god had written it as a vital message to humankind, why did it not include anything unknown at the time in unmistakable terms, as proof of divine authorship and omniscience? Why, for example, did it not tell us about atoms, germs, or galaxies; that Earth is an oblate spheroid orbiting the Sun along with other planets; or explain the relationship between mass and gravity and why black holes exist?

Why not? Because the authors of the Bible were ignorant of these things. They were not creator gods, but ancient Near Eastern writers doing their best to invent plausible narratives within their cultural preconceptions — of a spirit-filled world that ran on magic — when everything they knew lay within a few days’ walk of home in the hills of Canaan.

So, compare their description of the universe as they imagined it with what science now shows us: in this case, an ultramassive black hole revealed by how its gravity bends light from a background galaxy into an “Einstein ring”, a phenomenon predicted by Einstein’s general theory of relativity.
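For readers curious about the geometry, the angle at which the ring appears follows from a textbook general-relativity result for a point-mass lens (this is the standard formula, not taken from the paper itself):

```latex
% Angular Einstein radius of a point-mass gravitational lens:
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{LS}}{D_{L}\,D_{S}}}
% M is the lens mass; D_L, D_S and D_{LS} are angular-diameter distances
% to the lens, to the source, and from lens to source respectively.
```

When the source, lens and observer are almost perfectly aligned, the source's light is smeared into a ring of this angular radius, which is why the background galaxy appears as the "Cosmic Horseshoe".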

The description comes from the Royal Astronomical Society news release and the open-access paper in Monthly Notices of the Royal Astronomical Society.

First, let's see how the Bible's authors described the entire universe as they saw it without the benefit of scientific instruments or theoretical physics:

And God said, Let there be a firmament in the midst of the waters, and let it divide the waters from the waters. And God made the firmament, and divided the waters which were under the firmament from the waters which were above the firmament: and it was so. And God called the firmament Heaven. And the evening and the morning were the second day. And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so. And God called the dry land Earth; and the gathering together of the waters called he Seas: and God saw that it was good. (Genesis 1.6-10)

And God made two great lights; the greater light to rule the day, and the lesser light to rule the night: he made the stars also. And God set them in the firmament of the heaven to give light upon the earth, And to rule over the day and over the night, and to divide the light from the darkness: and God saw that it was good. (Genesis 1.16-18)

Now compare that to this image of a tiny fragment of it that astronomers at the Royal Astronomical Society have just released. It shows the gravitational lensing effect and the resulting Einstein ring. Bear in mind that this is a tiny fragment of the universe that would be entirely hidden by a grain of rice held between the thumb and forefinger of your outstretched arm. There is absolutely nothing to compare it with in the Bible, obviously.
'Most massive black hole ever discovered' is detected
Astronomers have discovered potentially the most massive black hole ever detected.

The cosmic behemoth is close to the theoretical upper limit of what is possible in the universe and is 10,000 times heavier than the black hole at the centre of our own Milky Way galaxy.

The Cosmic Horseshoe gravitational lens.
The newly discovered ultramassive black hole lies at the centre of the orange galaxy. Far behind it is a blue galaxy that is being warped into the horseshoe-shaped ring by distortions in spacetime created by the immense mass of the foreground orange galaxy.

Credit: NASA/ESA (CC BY 4.0)
It exists in one of the most massive galaxies ever observed – the Cosmic Horseshoe – which is so big it distorts spacetime and warps the passing light of a background galaxy into a giant horseshoe-shaped Einstein ring.

Such is the enormity of the ultramassive black hole that it equates to 36 billion solar masses, according to a new paper published today in Monthly Notices of the Royal Astronomical Society.
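As a quick sanity check of my own (not the RAS's), the press-release figures can be recovered from the paper's measured value of log₁₀(M_BH/M☉) = 10.56:

```python
# Recovering the press-release numbers from the paper's measurement.
M_bh = 10 ** 10.56    # black hole mass in solar masses, from log10 = 10.56
M_sgr_a_star = 4.0e6  # Milky Way's central black hole, solar masses (approx.)

print(M_bh / 1e9)           # ~36 (billion solar masses)
print(M_bh / M_sgr_a_star)  # ~9000, i.e. the release's "10,000 times heavier"
```

The two headline figures are therefore consistent with each other to within the quoted uncertainties.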

It is thought that every galaxy in the universe has a supermassive black hole at its centre and that bigger galaxies host bigger ones, known as ultramassive black holes.

This is amongst the top 10 most massive black holes ever discovered, and quite possibly the most massive. Most of the other black hole mass measurements are indirect and have quite large uncertainties, so we really don't know for sure which is biggest. However, we’ve got much more certainty about the mass of this black hole thanks to our new method.

Professor Thomas Collett, co-author
Institute of Cosmology and Gravitation
University of Portsmouth, Portsmouth, UK.

Researchers detected the Cosmic Horseshoe black hole using a combination of gravitational lensing and stellar kinematics (the study of how stars move within galaxies, including their speeds and orbits around black holes).

The latter is seen as the gold standard for measuring black hole masses, but doesn't really work outside of the very nearby universe because galaxies appear too small on the sky to resolve the region where a supermassive or ultramassive black hole lies.

[Adding in gravitational lensing helped the team] push much further out into the universe. We detected the effect of the black hole in two ways – it is altering the path that light takes as it travels past the black hole and it is causing the stars in the inner regions of its host galaxy to move extremely quickly (almost 400 km/s). By combining these two measurements we can be completely confident that the black hole is real.

Professor Thomas Collett.

This discovery was made for a 'dormant' black hole – one that isn’t actively accreting material at the time of observation. Its detection relied purely on its immense gravitational pull and the effect it has on its surroundings. What is particularly exciting is that this method allows us to detect and measure the mass of these hidden ultramassive black holes across the universe, even when they are completely silent.

Carlos Melo-Carneiro, lead author.
Instituto de Física
Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil.

Another image of the Cosmic Horseshoe, but with the pair of images of a second background source highlighted.
The faint central image forms close to the black hole, which is what made the new discovery possible.

NASA/ESA/Tian Li (University of Portsmouth) (CC BY 4.0).
The Cosmic Horseshoe black hole is located far from Earth, at a distance of some 5 billion light-years.

Typically, for such remote systems, black hole mass measurements are only possible when the black hole is active. But those accretion-based estimates often come with significant uncertainties. Our approach, combining strong lensing with stellar dynamics, offers a more direct and robust measurement, even for these distant systems.

Carlos Melo-Carneiro.

The discovery is significant because it will help astronomers understand the connection between supermassive black holes and their host galaxies.

We think the size of both is intimately linked, because when galaxies grow they can funnel matter down onto the central black hole. Some of this matter grows the black hole but lots of it shines away in an incredibly bright source called a quasar. These quasars dump huge amounts of energy into their host galaxies, which stops gas clouds condensing into new stars.

Professor Thomas Collett.

Our own galaxy, the Milky Way, hosts a 4 million solar mass black hole. Currently it's not growing fast enough to blast out energy as a quasar but we know it has done in the past, and it may well do again in the future.

The Andromeda Galaxy and our Milky Way are moving together and are expected to merge in about 4.5 billion years, which is the most likely time for our supermassive black hole to become a quasar once again, the researchers say.

An interesting feature of the Cosmic Horseshoe system is that the host galaxy is a so-called fossil group.

Fossil groups are the end state of the most massive gravitationally bound structures in the universe, arising when they have collapsed down to a single extremely massive galaxy, with no bright companions.

It is likely that all of the supermassive black holes that were originally in the companion galaxies have also now merged to form the ultramassive black hole that we have detected. So we're seeing the end state of galaxy formation and the end state of black hole formation.

Professor Thomas Collett.

The Cosmic Horseshoe black hole was something of a serendipitous find. It came about as the researchers were studying the galaxy’s dark matter distribution in an attempt to learn more about the mysterious hypothetical substance.

Now that they’ve realised their new method works for black holes, they hope to use data from the European Space Agency’s Euclid space telescope to detect more supermassive black holes and their hosts to help understand how black holes stop galaxies forming stars.

Publication:
ABSTRACT
Supermassive black holes (SMBHs) are found at the centre of every massive galaxy, with their masses tightly connected to their host galaxies through a co-evolution over cosmic time. For massive ellipticals, the SMBH mass (\(M_\text{BH}\)) strongly correlates with the host central stellar velocity dispersion (\(\sigma_e\)) via the \(M_\text{BH}-\sigma_e\) relation. However, SMBH mass measurements have traditionally relied on central stellar dynamics in nearby galaxies (\(z < 0.1\)), limiting our ability to explore the SMBHs across cosmic time. In this work, we present a self-consistent analysis combining 2D stellar dynamics and lens modelling of the Cosmic Horseshoe gravitational lens system (\(z_l = 0.44\)), one of the most massive lens galaxies ever observed. Using MUSE integral-field spectroscopy and high-resolution Hubble Space Telescope imaging, we simultaneously model the radial arc – sensitive to the inner mass structure – with host stellar kinematics to constrain the galaxy’s central mass distribution and SMBH mass. Bayesian model comparison yields a \(5\sigma\) detection of an ultramassive black hole with \(\log_{10}(M_\text{BH}/{\rm M}_{\odot}) = 10.56^{+0.07}_{-0.08} \pm (0.12)^\text{sys}\), consistent across various systematic tests. Our findings place the Cosmic Horseshoe \(1.5\sigma\) above the \(M_\text{BH}-\sigma_e\) relation, supporting an emerging trend observed in brightest cluster galaxies and other massive galaxies, which suggests a steeper \(M_\text{BH}-\sigma_e\) relationship at the highest masses, potentially driven by a different co-evolution of SMBHs and their host galaxies. Future surveys will uncover more radial arcs, enabling the detection of SMBHs over a broader redshift and mass range. These discoveries will further refine our understanding of the \(M_\text{BH}-\sigma_e\) relation and its evolution across cosmic time.

1 INTRODUCTION
Most massive galaxies are believed to host a supermassive black hole (SMBH) at their centre. More importantly, host galaxies and their SMBHs exhibit clear scaling relations, pointing to a co-evolution between the galaxy and the SMBH (Kormendy & Ho 2013). The SMBH mass (\(M_\text{BH}\)) has been shown to correlate with various galaxy properties, such as the bulge luminosity (e.g. Magorrian et al. 1998; Marconi & Hunt 2003; Gültekin et al. 2009), stellar bulge mass (e.g. Laor 2001; McLure & Dunlop 2002), dark matter (DM) halo mass (e.g. Marasco et al. 2021; Powell et al. 2022), number of the host’s globular clusters (e.g. Burkert & Tremaine 2010; Harris, Poole & Harris 2014), and stellar velocity dispersion (e.g. Gebhardt et al. 2000; Beifiori et al. 2009). Notably, the \(M_\text{BH}-\sigma_e\) relation, which links SMBH mass to the effective stellar velocity dispersion of the host (\(\sigma_e\)), remains tight across various morphological types and SMBH masses (van den Bosch 2016). None the less, when SMBHs accrete mass from their neighbourhoods, they can act as active galactic nuclei (AGNs), injecting energy into the surrounding gas in the form of feedback. This feedback can be either positive, triggering star formation (Ishibashi & Fabian 2012; Silk 2013; Riffel et al. 2024), or negative, quenching galaxy growth (e.g. Hopkins et al. 2006; Dubois et al. 2013; Costa-Souza et al. 2024).

It is expected that the most massive galaxies in the Universe, such as brightest cluster galaxies (BCGs), host the most massive SMBHs. Indeed, so-called ultramassive black holes (UMBHs; \(M_\text{BH} \ge 10^{10}M_\odot\)) have been found in such systems (e.g. Hlavacek-Larrondo et al. 2012). Most of these UMBHs have been measured through spatially resolved dynamical modelling of stars and/or gas. For instance, the UMBH in Holm 15A at \(z=0.055\) (\(M_\text{BH} = (4.0 \pm 0.80) \times 10^{10}M_\odot\); Mehrgan et al. 2019) and the UMBH in NGC 4889 at \(z = 0.021\) (\(M_\text{BH} = (2.1 \pm 1.6) \times 10^{10}M_\odot\); McConnell et al. 2012) were both determined using stellar dynamical modelling. However, despite the success of this technique in yielding hundreds of SMBH mass measurements, the requirement for high-quality spatially resolved spectroscopy poses significant challenges for studies at increasing redshift (see e.g. Kormendy & Ho 2013, Supplemental Material S1).

None the less, the significance of these UMBHs lies in the fact that many of them deviate from the standard linear \(M_\text{BH} - \sigma_e\) relation (e.g. Kormendy & Ho 2013; van den Bosch 2016). This suggests either a distinct evolutionary mechanism governing the growth of the largest galaxies and their SMBHs (McConnell et al. 2011), leading to a significantly steeper relation (Bogdán et al. 2018), or a potential decoupling between the SMBH and host galaxy co-evolution. Populating the high-mass end of the \(M_\text{BH} - \sigma_e\) relation, particularly through direct \(M_\text{BH}\) measurements, could help resolve this ongoing puzzle.

Recently, Nightingale et al. (2023), by modelling the gravitationally lensed radial image near the Abell 1201 BCG (⁠\(\small z=0.169\)⁠), were able to measure the mass of its dormant SMBH as \(\small M_\text{BH} = (3.27 \pm 2.12) \times 10^{10}M_\odot\)⁠, qualifying it as a UMBH. This provides a complementary approach to other high-z probes of SMBH mass, such as reverberation mapping (Blandford & McKee 1982; Bentz & Katz 2015) and AGN spectral fitting (Shen 2013). Unlike these methods, which require active accretion and depend on local Universe calibrations, the lensing technique offers a direct measurement independent of the SMBH’s accretion state.
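The lensing measurement rests on the point-mass Einstein radius, \(\small \theta_E = \sqrt{(4GM/c^2)\, D_{ls}/(D_l D_s)}\). A minimal sketch, with hypothetical angular-diameter distances chosen only for illustration (not the Abell 1201 or Cosmic Horseshoe values):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg
MPC = 3.086e22     # one megaparsec, m

def einstein_radius_arcsec(m_sun, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Einstein radius of a point-mass lens, in arcseconds:
    theta_E = sqrt(4 G M / c^2 * D_ls / (D_l * D_s)),
    with angular-diameter distances given in Mpc."""
    theta_rad = math.sqrt(
        (4.0 * G * m_sun * MSUN / C**2) * d_ls_mpc / (d_l_mpc * d_s_mpc * MPC)
    )
    return math.degrees(theta_rad) * 3600.0

# A ~3e10 Msun point mass with these assumed distances yields a
# sub-arcsecond Einstein radius, which is why a radial image forming
# very close to the lens centre is so diagnostic of the central mass.
theta = einstein_radius_arcsec(3.27e10, 600.0, 1800.0, 1400.0)
print(f"theta_E ~ {theta:.2f} arcsec")
```

Since \(\small \theta_E \propto \sqrt{M}\), even the difference between a \(\small 10^9\) and a \(\small 10^{10}\,M_\odot\) black hole shifts the radial image position by a measurable factor of \(\small \sim 3\).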

In this paper, we analyse the Cosmic Horseshoe gravitational lens system (Belokurov et al. 2007), where the lens galaxy is one of the most massive strong gravitational lenses known to date. The lens galaxy is an early-type galaxy (ETG) at redshift \(\small z_l = 0.44\)⁠, possibly part of a fossil group (Ponman et al. 1994), and is notable for lensing one of its sources into a nearly complete Einstein ring (the Horseshoe). Additionally, a second multiply imaged source forms a radial arc near the centre of the lens galaxy. Because this radial image forms very close to the centre, the inner DM distribution of the Cosmic Horseshoe can be studied in detail, as done by Schuldt et al. (2019). By simultaneously modelling stellar kinematics from long-slit spectroscopy and the positions of the lensed sources, Schuldt et al. (2019) found that the DM halo is consistent with a Navarro–Frenk–White (NFW; Navarro, Frenk & White 1997) profile, with the DM fraction within the effective radius (⁠\(\small R_e\)⁠) estimated to be between 60 per cent and 70 per cent. Moreover, their models include a point mass at the galaxy’s centre, reaching values of \(\small \sim 10^{10} M_\odot\)⁠, which could represent an SMBH; however, they did not pursue further investigations into this possibility.

Using new integral-field spectroscopic data from the Multi Unit Spectroscopic Explorer (MUSE) and imaging from the Hubble Space Telescope (HST), we conducted a systematic modelling of the Cosmic Horseshoe system to reassess the evidence for an SMBH at the heart of the lens galaxy. We performed a self-consistent analysis of both strong gravitational lensing (SGL) and stellar dynamics, which demonstrated that the presence of an SMBH is necessary to fit both data sets simultaneously.

This paper is structured as follows: In Section 2, we present the HST imaging data and MUSE observations, along with the kinematic maps used for the dynamical modelling.
Section 3 briefly summarizes the lensing and dynamical modelling techniques, including the multiple-lens-plane formalism, the approximations adopted in this work, and the mass profile parametrization. In Section 4, we present the results from our fiducial model and alternative models, which we use to address the systematics on the SMBH mass. In Section 5, we discuss our results and present other astrophysical implications. Finally, we summarize and conclude in Section 6. Unless otherwise stated, all parameter estimates are derived from the final sampling chain, with reported values representing the median of each parameter’s one-dimensional marginalized posterior distribution, with uncertainties corresponding to the \(\small 16^\text{th}\) and \(\small 84^\text{th}\) percentiles. Furthermore, throughout this paper, we adopt the cosmological parameters consistent with Planck Collaboration XIII (2016): \(\small \Omega _{\Lambda ,0} = 0.6911\)⁠, \(\small \Omega _{\text{m},0} = 0.3089\)⁠, \(\small \Omega _{\text{b},0} = 0.0486\)⁠, and \(\small H_0 = 67.74\) \(\small \text{km}\ \text{s}^{-1}\ \text{Mpc}^{-1}\).
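The reporting convention above (median with asymmetric 16th/84th-percentile uncertainties) can be sketched for a single marginalized parameter. The mock chain below is purely illustrative Gaussian noise loosely centred on the Abell 1201 mass scale, not a real posterior:

```python
import random
import statistics

def summarize_chain(samples):
    """Median with asymmetric uncertainties from the 16th and 84th
    percentiles of a one-dimensional marginalized posterior sample."""
    q = statistics.quantiles(samples, n=100, method="inclusive")
    p16, p84 = q[15], q[83]           # 16th and 84th percentiles
    med = statistics.median(samples)
    return med, med - p16, p84 - med  # value, minus error, plus error

# Illustrative mock chain: for a Gaussian posterior the 16th/84th
# percentiles recover the familiar +/- 1-sigma interval.
random.seed(1)
chain = [random.gauss(3.27e10, 0.5e10) for _ in range(20000)]
med, lo, hi = summarize_chain(chain)
print(f"M_BH = {med:.2e} (-{lo:.1e} / +{hi:.1e}) Msun")
```

For skewed posteriors the two errors differ, which is why papers quote them separately rather than as a single symmetric uncertainty.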

Carlos R Melo-Carneiro, Thomas E Collett, Lindsay J Oldham, Wolfgang Enzi, Cristina Furlanetto, Ana L Chies-Santos, Tian Li, (2025)
Unveiling a 36 billion solar mass black hole at the centre of the Cosmic Horseshoe gravitational lens,
Monthly Notices of the Royal Astronomical Society, 541(4), 2853–2871, https://doi.org/10.1093/mnras/staf1036

Copyright: © 2025 The Royal Astronomical Society.
Published by Oxford University Press. Open access.
Reprinted under a Creative Commons Attribution 4.0 International license (CC BY 4.0)
The discovery and analysis of black holes, and phenomena such as Einstein rings, would have been utterly incomprehensible to the authors of the Bible. These were people with no concept of galaxies, the vastness of the universe, or even that Earth is a sphere orbiting the Sun. Their worldview was of a flat Earth covered by a solid dome, with the Sun, Moon, and “stars” fixed to it. The very idea of light being bent by gravity, or of objects so massive that even light cannot escape, would have been as far beyond their imagination as quantum mechanics itself.

When we compare their primitive cosmology with what modern science reveals—billions of galaxies, relativistic spacetime, the quantum-scale behaviour of matter, and black holes bending light into perfect circles—the contrast could not be more stark. The biblical description is not merely simplified; it is wrong on almost every measurable level. It has Earth at the centre, the stars as small lights, and the sky as a hard surface holding back water. Science, by contrast, uncovers a cosmos governed by consistent natural laws, tested and confirmed through observation and mathematics.

This is compelling evidence that an omniscient creator god did not write the Bible. If it had done, it could have contained truths about the nature of the cosmos that were unknown at the time, expressed in terms clear enough to be recognisable today—atoms, germs, the vastness of space, or even the basic structure of the solar system. Instead, what we find are the assumptions of scientifically illiterate Bronze Age people, drawing on local myths and imagination. The difference between their errors and the precision of modern astrophysics is not a matter of interpretation—it is a matter of fact.