HealthCommentary

Exploring Human Potential

Will Dobb-ists Infect Clinical Protocols?

Posted on | February 28, 2024 | 4 Comments

Mike Magee

For most loyalist Americans at the turn of the 20th century, Justice John Marshall Harlan’s decision in Jacobson v. Massachusetts (1905) was a “slam dunk.” In it, he upheld the state’s power to require a reluctant Methodist minister in Massachusetts to undergo smallpox vaccination during a regional epidemic or pay a fine.

Justice Harlan wrote at the time: “Real liberty for all could not exist under the operation of a principle which recognizes the right of each individual person to use his own, whether in respect of his person or his property, regardless of the injury that may be done to others.” 

What could possibly go wrong here? Of course, citizens had not fully considered the “unintended consequences,” let alone the presence of President Wilson and others focused on “strengthening the American stock.” 

This involved a two-prong attack on “the enemy without” and “the enemy within.”

The Immigration Act of 1924, signed by President Calvin Coolidge, was the culmination of an attack on “the enemy without.” Quotas for immigration were set according to the 1890 Census, which had the effect of advantaging the selective influx of Anglo-Saxons over Eastern Europeans and Italians. Asian immigration (with the exception of Filipinos, who were U.S. nationals) was effectively banned.

As for “the enemy within,” rooters for the cause of weeding out “undesirable human traits” from the American populace had the firm support of premier academics from almost every elite university across the nation. This came in the form of new departments focused on advancing the “Eugenics Movement,” an excessively discriminatory, quasi-academic approach based on the work of Francis Galton, cousin of Charles Darwin.

Isolationists and segregationists picked up the thread and ran with it, focusing on vulnerable members of the community labeled as paupers, mentally disabled, dwarfs, promiscuous, or criminal.

In a strategy eerily reminiscent of that employed by Mississippi Pro-Life advocates in Dobbs v. Jackson Women’s Health Organization in 2021, Dr. Albert Priddy, activist director of the Virginia State Colony for Epileptics and Feebleminded, teamed up with radical Virginia state senator Aubrey Strode to hand-pick and literally make a “federal case” out of a young institutionalized teen resident named Carrie Buck.

Their goal was to force the nation’s highest courts to sanction state sponsored mandated sterilization.

In a strange twist of fate, the Dobbs name was central to this case as well. That is because Carrie Buck was under the care of foster parents, John and Alice Dobbs, after Carrie’s mother, Emma, was declared mentally incompetent. At the age of 17, Carrie, who had been removed from school after the 6th grade to work as a domestic for the Dobbs, was raped by their nephew and gave birth to a daughter, Vivian. This led to her mandated institutionalization and subsequent official labeling as an “imbecile.”

In his majority decision in Buck v. Bell supporting Dr. Priddy, Supreme Court Justice Oliver Wendell Holmes leaned heavily on precedent. Reflecting his extreme bias, he wrote: “The principle that supports compulsory vaccination is broad enough to cover the cutting of Fallopian tubes (Jacobson v. Massachusetts 197 US 11). Three generations of imbeciles are enough.”

Carrie Buck lived to age 76, had no mental illness, and read the Charlottesville, VA newspaper every day, cover to cover. There is no evidence that her mother Emma was mentally incompetent. Her daughter Vivian was an honor student who died in the custody of John and Alice Dobbs at the age of 8.

The prejudicial idea that inferiority (or otherness) is a biological construct has deeply embedded roots; it was used to justify indentured servitude and the enslavement of Africans, and it traces back to our very beginnings as a nation. Our third president, Thomas Jefferson, was not shy in declaring that his enslaved Africans were biologically distinguishable from land-holding whites. Anticipating the Eugenics activists of a century later, the President claimed that his enslaved Africans’ suitability for brutal labor rested on their greater physiologic tolerance for plantation-level heat exposure and lesser (required) kidney output.

Helen Burstin MD, CEO of the Council of Medical Specialty Societies, drew a direct line from those early days to the present-day practice of medicine, anchored in opaque, computerized decision-support algorithms. “It is mind-blowing in some ways how deeply embedded in history some of this misinformation is,” she said. She was talking about risk-prediction tools that are commercial and proprietary, and are used for opaque oversight of “roughly 200 million U.S. citizens per year.” Originally designed for health insurance prior-approval systems and managed care decisions, they now provide the underpinning for new AI super-charged personalized medicine decision-support systems.

Racially constructed clinical guidelines have been documented, and some rewritten, over the past few years. They include obstetrical guidelines that disadvantaged black mothers seeking vaginal birth over Cesarean section, and limitations on treatment of black children with fever and acute urinary tract infection, to name just two examples. Other studies uncovered reinforcement of myths that “black people have higher pain thresholds,” greater strength, and resistance to disease – all in support of their original usefulness as slave laborers. If racism has historically found a way to insinuate itself into these tools, is it unreasonable to believe that committed Christian Nationalists might do the same to control women’s health autonomy?

Can’t we just make a fresh start on clinical guidelines? Sadly, it is not that easy. As James Baldwin famously wrote, “People are trapped in history and history is trapped in them.” The explosion of technologic advance in health care has the potential to trap the bad with the good, as vast databases are fed into hungry machines indiscriminately. 

Computing power, genomic databases, EMRs, natural language processing, machine learning, generative AI, and massive multimodal downloads bury our historic biases and errors under multi-layered camouflage, and leave plenty of room for invisible inserts by the Leonard Leos of the world.

Modern-day Dobb-ists have now targeted vulnerable women and children, using carefully constructed legal cases and running them all the way up to the Supreme Court. This strategy was joined with a second approach (MAGA Republican takeovers of state legislatures). Together they are intended to ban abortion, explore contraceptive restrictions, eliminate fertility therapy, and criminalize the practice of medicine. It is one more simple step to require that these restrictions on medical freedom and autonomy be encoded into binding clinical protocols.

In an age where local bureaucrats are determined to “play doctor”, and modern day jurists are determined to provide cover for a third wave of protocol encoded Dobb-ists, “the enemy without” runs the risk of becoming “the enemy within.” 

Trump – The Modern Day Piper of Hamelin

Posted on | February 17, 2024 | 2 Comments

Mike Magee

You would need a mountain of psychiatrists to explain why Trump is the way he is, and an army of scholars to help us understand why Republican leaders, in state and federal positions, have decided to follow this piper’s call.

Which brings me to a well-known parable, described below by the AI-enabled Bing Copilot:

“In the town of Hamelin, during the year 1284, a rat infestation plagued the streets. The townspeople were desperate for a solution. That’s when a mysterious figure appeared—a piper dressed in multicolored (“pied”) clothing. He claimed to be a rat-catcher and promised to rid the town of its vermin.

The mayor, eager to be rid of the rats, struck a deal with the piper. He pledged to pay the piper 1,000 guilders in exchange for his services. The piper accepted and began to play his magic pipe. The enchanting music drew the rats out of their hiding places, and they followed him to the River Weser. There, they drowned, leaving Hamelin rat-free.

However, when it came time to pay the piper, the mayor reneged on his promise. He reduced the payment to a mere 50 guilders and accused the piper of bringing the rats himself as part of an extortion scheme.

Enraged by this betrayal, the piper vowed revenge. On Saint John and Paul’s day, while the adults were attending church, he returned to Hamelin. This time, he was dressed in green like a hunter and played his pipe once more. The haunting melody captivated the town’s children, and 130 of them followed him out of the town and into a mysterious cave.”

In my reading of the parable, the piper is Trump, whose magical tune is addictive, infective, and destructive. He has been hired by the mayor(s), an array of powerful Republican party elite, who, after realizing they couldn’t beat the piper, assumed that they could control him and use his magnetism to convert long-term goals into short-term victories. And for four years, from 2016 to 2020, they were right. But the piper, nursing a thirst for revenge for a million slights and offenses by these fawning elites, never relinquished his dream of ultimate power. Instead, he continued to plow along with his pipe, the one that first attracted the rats (low-hanging, ethically compromised low-lifes like Reps. Chris Collins (R-N.Y.) and Duncan Hunter (R-Calif.), who would soon be indicted), and then returned in 2023, tooting the same song from the same horn, drawing Republican “children” out of their political chambers and into a mysterious cave, unreachable to support immigration reform, oppressed women, or even a beleaguered Ukraine on its soon-to-be last dying breath.

On January 11, 1944, another presidential piper, the supremely popular four-term leader FDR, knew well how to use his song to bring along our citizens. On that day, he promised a “Second Bill of Rights,” stating that the original was now “inadequate to assure us equality in the pursuit of happiness…Necessitous men are not free,” he said.

Harvard-trained moral philosopher Susan Neiman PhD recalled those words recently in calling for “a commitment to universalism over tribalism, a firm distinction between justice and power, and a belief in the possibility of progress.”

She also recalled the work product of Eleanor Roosevelt who guided the creation of the UN’s “Universal Declaration of Human Rights” which she herself admitted is “a declaration that remains aspirational.” Signed by 150 nations, it remains the most translated document in the world.

Embedded in the declaration is a broad and inclusive definition of health. It reads “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.” Most especially, Eleanor Roosevelt highlighted that “Motherhood and childhood are entitled to special care and assistance. All children, whether born in or out of wedlock, shall enjoy the same social protection.”

And yet, as Trump blows his horn from multiple courthouse steps in the post-Dobbs era, a recent March of Dimes report states that “maternal deaths are on the rise, with the rate doubling between 2018 to 2021 from 17.4 to 32.9 deaths per 100,000 live births.”

Susan Neiman sees the problem as deeply embedded in America’s culture and politics, where guideposts and philosophical values are being dismantled. Cast in this light, the U.S. health care system is systemically broken and, at best, highly discriminatory.

When the piper of Hamelin first led the rats into the river, it was alarmingly easy, and left a wide opening for his next steps. The vacuum left by an erosion of justice is always filled with power – and specifically, power over someone. The targets of Trump’s power play continue to be women, children, and people of color in America, and the established Republican party. But in the next trip to the river, he believes his ultimate target is within his grasp – it is our Democracy.

 

“Generative Pre-Trained Transformers” (GPT): An Historical Perspective.

Posted on | February 13, 2024 | 2 Comments

Mike Magee

Over the past year, the general popularization of AI or Artificial Intelligence has captured the world’s imagination. Of course, academicians often emphasize historical context.  But entrepreneurs tend to agree with Thomas Jefferson who said, “I like dreams of the future better than the history of the past.”

This particular dream however is all about language, its standing and significance in human society. Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”

Well before ChatGPT became a household phrase, there was LDT, or the laryngeal descent theory. It professed that humans’ unique capacity for speech was the result of a voice box, or larynx, that is lower in the throat than in other primates. This permitted the “throat shape, and motor control” to produce the vowels that are the cornerstone of human speech. Speech – and therefore language arrival – was pegged to anatomical evolutionary changes dated at between 200,000 and 300,000 years ago.

That theory, as it turns out, had very little scientific evidence. And in 2019, a landmark study set about pushing the date of primate vocalization back to at least 3 to 5 million years ago. As scientists summarized it in three points: “First, even among primates, laryngeal descent is not uniquely human. Second, laryngeal descent is not required to produce contrasting formant patterns in vocalizations. Third, living nonhuman primates produce vocalizations with contrasting formant patterns.”

Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology. If you want to study speech science, you better have a working knowledge of “phonetics, anatomy, acoustics and human development” say the  experts. You could add to this “syntax, lexicon, gesture, phonological representations, syllabic organization, speech perception, and neuromuscular control.”

Professor Paul Pettitt, who makes a living at the University of Oxford interpreting ancient rock paintings in Africa and beyond, sees the birth of civilization in multimodal language terms. He says, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.” Google CEO Sundar Pichai maintains a similarly expansive view when it comes to language. In his December 6, 2023, introduction of the company’s groundbreaking LLM (large language model), Gemini (a competitor of ChatGPT), he described the new product as “our largest and most capable AI model with natural image, audio and video understanding and mathematical reasoning.”

Digital Cognitive Strategist, Mark Minevich, echoed Google’s view that the torch of human language had now gone well beyond text alone and had been passed to machines. His review: “Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on, yet goes far beyond, predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically.”

GPT what???

O.K. Let’s take a step back and give us all a chance to catch up.

What we call AI or “artificial intelligence” is a 70-year-old concept that used to be called “deep learning.” This was the brain construct of University of Chicago research scientists Warren McCulloch and Walter Pitts, who developed the concept of “neural nets” in 1944, modeling the theoretical machine learner after the human brain: multiple overlapping transit fibers, joined at synaptic nodes which, with adequate stimulus, could allow gathered information to pass on to the next fiber down the line.

On the strength of that concept, the two moved to MIT in 1952 and launched the Cognitive Science Department, uniting computer scientists and neuroscientists. In the meantime, Frank Rosenblatt, a Cornell psychologist, invented the “first trainable neural network” in 1957, which he futuristically termed the “Perceptron.” It included a data input layer, a sandwich layer that could adjust information packets with “weights” and “firing thresholds,” and a third output layer to allow data that met the threshold criteria to pass down the line.
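
For readers who want to see the mechanics, here is a minimal sketch of the Perceptron idea in Python: an input layer of numbers, adjustable “weights,” a “firing threshold” deciding what passes to the output, and a simple error-driven rule for training. The tiny AND-gate dataset, learning rate, and threshold are illustrative assumptions, not details of Rosenblatt’s original machine.

```python
# Minimal perceptron sketch: weighted inputs, a firing threshold,
# and a simple error-driven rule for adjusting the weights.
# The tiny AND-gate dataset and learning rate are illustrative assumptions.

def fire(inputs, weights, threshold):
    """Output 1 if the weighted sum of inputs meets the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

def train(samples, weights, threshold, rate=0.1, epochs=20):
    """Nudge each weight in the direction that reduces the output error."""
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(inputs, weights, threshold)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Teach the perceptron a logical AND: fire only when both inputs are 1.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights = train(samples, weights=[0.0, 0.0], threshold=0.5)
print([fire(x, weights, 0.5) for x, _ in samples])  # expected: [0, 0, 0, 1]
```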

Back at MIT, the Cognitive Science Department was hijacked in 1969 by mathematicians Marvin Minsky and Seymour Papert and became the MIT Artificial Intelligence Laboratory. They summarily trashed Rosenblatt’s Perceptron machine, believing it to be underpowered and inefficient at delivering the most basic computations. By 1980, the department was ready to deliver a “never mind,” as computing power grew and algorithms for encoding thresholds and weights at neural nodes became efficient and practical.

The computing leap, experts now agree, came “courtesy of the computer-game industry,” whose “graphics processing unit” (GPU), housing thousands of processing cores on a single chip, was effectively the neural net that McCulloch and Pitts had envisioned. By 1977, Atari had developed game cartridges and microprocessor-based hardware with a successful television interface.

With the launch of the Internet and the commercial explosion of desktop computing, language – the fuel for human interaction worldwide – grew exponentially in importance. More specifically, the greatest demand was for language that could link humans to machines in a natural way.

With the explosive growth of text data, the focus initially was on Natural Language Processing (NLP), “an interdisciplinary subfield of computer science and linguistics primarily concerned with giving computers the ability to support and manipulate human language.” Training software initially used annotated or referenced texts to address or answer specific questions or tasks precisely. Usefulness and accuracy on inquiries outside of that pre-determined training were limited, and inefficiency undermined their usage.
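
As a rough, hedged illustration of that earlier task-specific style of NLP, here is a minimal sketch assuming the widely used scikit-learn library. The tiny annotated “scheduling request” dataset, its labels, and the task itself are invented for illustration, not drawn from any system described here.

```python
# Sketch of task-specific NLP: a model trained on hand-annotated text
# answers only the narrow question it was trained on (here, a made-up
# "is this message about scheduling?" task). Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny, invented annotated corpus: 1 = scheduling request, 0 = not.
texts = [
    "Can we move the appointment to Friday?",
    "Please schedule a follow-up visit next week.",
    "The lab results came back normal.",
    "Here is the invoice for last month.",
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()              # bag-of-words features
features = vectorizer.fit_transform(texts)  # rows = texts, cols = word counts

model = LogisticRegression().fit(features, labels)

# Works reasonably on in-domain questions...
print(model.predict(vectorizer.transform(["Could you schedule me for Monday?"])))
# ...but has no basis for anything outside its narrow training task.
print(model.predict(vectorizer.transform(["What causes sickle cell anemia?"])))
```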

But computing power had now advanced far beyond what Warren McCulloch and Walter Pitts could have possibly imagined in 1944, while the concept of “neural nets” couldn’t be more relevant. IBM describes the modern-day version this way:

“Neural networks …are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another… Artificial neural networks are comprised of node layers, containing an input layer, one or more hidden layers, and an output layer…Once an input layer is determined, weights are assigned. These weights help determine the importance of any given variable, with larger ones contributing more significantly to the output compared to other inputs. All inputs are then multiplied by their respective weights and then summed. Afterward, the output is passed through an activation function, which determines the output. If that output exceeds a given threshold, it “fires” (or activates) the node, passing data to the next layer in the network… it’s worth noting that the “deep” in deep learning is just referring to the depth of layers in a neural network. A neural network that consists of more than three layers—which would be inclusive of the inputs and the output—can be considered a deep learning algorithm. A neural network that only has two or three layers is just a basic neural network.”
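
To make IBM’s description concrete, here is a minimal sketch, in plain Python, of one forward pass through such a network: each node multiplies its inputs by weights, sums them, and passes the result through an activation function before handing it to the next layer. All weights, biases, and inputs are arbitrary illustrative values, not from any real model.

```python
import math

# One forward pass through a tiny network with an input layer,
# one hidden layer, and an output layer, as described above.
# All weights, biases, and inputs are arbitrary illustrative values.

def sigmoid(x):
    """Activation function: squashes the weighted sum into the 0-1 range."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weight_rows, bias):
    """Multiply inputs by each node's weights, sum, then 'fire' via the activation."""
    return [sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
            for weights in weight_rows]

inputs = [0.8, 0.2, 0.5]                         # input layer
hidden = layer(inputs, [[0.4, -0.6, 0.9],        # hidden layer: two nodes
                        [-0.3, 0.7, 0.1]], bias=0.1)
output = layer(hidden, [[1.2, -0.8]], bias=0.0)  # output layer: one node
print(output)  # a single value between 0 and 1
```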

The bottom line is that the automated system responds to an internal logic. The computer’s “next choice” is determined by how well it fits in with the prior choices. And it doesn’t matter where the words or “coins” come from. Feed it data, and it will “train” itself; and by following the rules or algorithms embedded in the middle decision layers or screens, it will “transform” the acquired knowledge into “generated” language that both human and machine understand.

In 2015, a group of tech entrepreneurs including Elon Musk and Reid Hoffman, believing AI could go astray if restricted or weaponized, formed a non-profit called OpenAI. A few years later they began releasing the deep learning GPT models that would eventually power ChatGPT. This solution was born out of the marriage of Natural Language Processing and deep learning neural nets, with a stated goal of “enabling humans to interact with machines in a more natural way.”

The GPT stood for “Generative Pre-trained Transformer.” Built into the software was the ability to “consider the context of the entire sentence when generating the next word” – a tactic known as “auto-regressive.” As a “self-supervised learning model,” GPT is able to learn by itself from ingesting huge amounts of anonymous text; transform it by passing it through a variety of intermediary weighted screens that jury the content; and allow passage (and survival) of data that is validated. The resultant output? Language that mimics human text.
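
As a toy, hedged illustration of that auto-regressive loop, the sketch below repeatedly scores candidate next words given everything generated so far, appends the best-scoring word, and repeats. The word-counting “model” and the one-line corpus are crude stand-ins for the transformer layers and billions of learned weights in a real GPT.

```python
from collections import Counter, defaultdict

# Toy auto-regressive text generator: choose each next word based on
# the words generated so far. The bigram-counting "model" below is a
# stand-in for the learned weights of a real GPT.

corpus = "the patient was seen by the doctor and the doctor ordered a test".split()

# "Train" by counting which word follows which
# (a crude stand-in for self-supervised learning on huge text corpora).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt, length=6):
    words = prompt.split()
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break                                       # no learned continuation
        words.append(candidates.most_common(1)[0][0])   # pick the best-scoring next word
    return " ".join(words)

print(generate("the"))  # extends the prompt one predicted word at a time
```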

Leadership at Microsoft was impressed, and in 2019 ponied up $1 billion to jointly participate in development of the product and serve as OpenAI’s exclusive cloud provider.

The first GPT released by OpenAI was GPT-1 in 2018. It was trained on the enormous BooksCorpus dataset. Its design included an input and output layer, with 12 successive transformer layers sandwiched in between. It was so effective at Natural Language Processing that minimal fine-tuning was required on the back end.

One year later, OpenAI released version two, called GPT-2, which was 10 times the size of its predecessor, with 1.5 billion parameters and the capacity to translate and summarize. GPT-3 followed in 2020. It had now grown to 175 billion parameters, more than 100 times the size of GPT-2, and was trained by ingesting a corpus of roughly 500 billion tokens of content (including text from my own book CODE BLUE). It could now generate long passages on verbal demand, do basic math, write code, and do (what the inventors describe as) “clever tasks.” An intermediate GPT-3.5 absorbed Wikipedia entries, social media posts and news releases.

On March 14, 2023, GPT-4 went big, now with multimodal capabilities spanning text, speech, images, and physical interactions with the environment. This represents an exponential convergence of multiple technologies including databases, AI, Cloud Computing, 5G networks, personal Edge Computing, and more.

The New York Times headline announced it as “Exciting and Scary.” Their technology columnist wrote, “What we see emerging are machines that know how to reason, are adept at all human languages, and are able to perceive and interact with the physical environment.” He was not alone in his concerns. The Atlantic, at about the same time, ran an editorial titled, “AI is about to make social media (much) more toxic.”

Leonid Zhukov, PhD, director of the Boston Consulting Group’s (BCG) Global AI Institute, believes “offerings like ChatGPT-4 and Genesis have the potential to become the brains of autonomous agents—which don’t just sense but also act on their environment—in the next 3 to 5 years. This could pave the way for fully automated workflows.”

Were he alive, Leonardo da Vinci would likely be unconcerned. Five hundred years ago, he wrote nonchalantly, “It had long come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things.”

2024 Word of the Year

Posted on | February 5, 2024 | 4 Comments

 

Mike Magee

Not surprisingly, Hcom’s nominee for “word of the year” involves AI, and specifically “the language of human biology.”

As Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute and coiner of the term “friendly AI,” stated in Forbes:

“Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement – wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.” 

Perhaps the simplest way to begin is to say that “missense” is a form of misspeak, or expressing oneself in words “incorrectly or imperfectly.” But in the case of “missense,” the language is not made of words, where (for example) the meaning of a sentence would be disrupted by misspelling or choosing the wrong word.

With “missense,” we’re talking about a different language – the language of DNA and proteins. Specifically, the focus is on how the four base units, or nucleotides, that provide the skeleton of a strand of DNA communicate instructions for each of the 20 different amino acids in the form of three-letter codes, or “codons.”

In this protein language, there are four nucleotides. Each “nucleotide” (adenine, guanine, cytosine, thymine) is a 3-part molecule which includes a nucleobase, a 5-carbon sugar, and a phosphate group. The four nucleotides’ unique chemical structures are designed to create two “base-pairs.” Adenine links to Thymine through two hydrogen bonds, and Cytosine links to Guanine through three hydrogen bonds.

A-T and C-G bonds effectively “reach across” two strands of DNA to connect them in the familiar “double-helix” structure. The strands gain length as the sugar and phosphate molecules on the top and bottom of each nucleotide join to one another, extending the backbone.

The A’s and T’s and C’s and G’s are the starting points of a code. A string of three, for example A-T-G, is called a “codon,” which in this case stands for one of the 20 amino acids common to all life forms, Methionine. There are 64 different codons – 61 direct the chain addition of one of the 20 amino acids (many amino acids are specified by more than one codon), and the remaining 3 serve as “stop codons” to end a protein chain.

Messenger RNA (mRNA) carries a mirror image of the coded nucleotide base string from the cell nucleus to ribosomes out in the cytoplasm of the cell. Codons then call up each amino acid, which, when linked together, form the protein. The protein’s structure is defined by the specific amino acids included and their order of appearance. Protein chains fold spontaneously, and in the process form a 3-dimensional structure that determines their biologic function.
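
For the code-minded, here is a minimal sketch of that codon logic in Python. The codon table is deliberately abbreviated for illustration (the full table has 64 entries), and a real translator would work from the mRNA transcript rather than the DNA coding strand shown here.

```python
# Translate a DNA coding-strand sequence into amino acids, three letters
# (one codon) at a time, stopping at a stop codon. The codon table below
# is deliberately abbreviated for illustration; the full table has 64 entries.
CODON_TABLE = {
    "ATG": "Methionine",        # also the usual "start" codon
    "GAG": "Glutamic acid",
    "GTG": "Valine",
    "TTT": "Phenylalanine",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):          # step through codon by codon
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break                                # stop codons end the chain
        protein.append(amino_acid)
    return protein

print(translate("ATGGAGTTTTAA"))
# ['Methionine', 'Glutamic acid', 'Phenylalanine']
```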

A mistake in a single letter of a codon can result in a mistaken message, or “missense.” In 2018, Alphabet’s DeepMind released AlphaFold, an artificial intelligence system able to predict a protein’s structure from its amino acid sequence, with the promise of accelerating drug discovery. Five years later, the company released AlphaMissense, which mines AlphaFold databases to learn this “protein language” much as the large language model (LLM) product ChatGPT learns human language. The ultimate goal: to predict where “disease-causing mutations are likely to occur.”

A work in progress, AlphaMissense has already created a catalogue of possible human missense mutations, declaring 57% to have no harmful effect, and 32% possibly linked to (still to be determined) human pathology. The company has open-sourced much of its database, and hopes it will accelerate the “analyses of the effects of DNA mutations and…the research into rare diseases.”

The numbers are not small. Believe it or not, AI says the 46-chromosome human genome theoretically harbors 71 million possible missense events waiting to happen. Up to now, only 4 million have been identified. For humans today, the average genome includes only about 9,000 of these mistakes, most of which have no bearing on life or limb.

But occasionally they do. Take, for example, Sickle Cell Anemia. The painful and life-limiting condition is the result of a single codon mistake (GTG instead of GAG) on the nucleotide chain coded to create the protein hemoglobin. That tiny error causes the 6th amino acid in the evolving hemoglobin chain, glutamic acid, to be substituted with the amino acid valine. Knowing this, investigators have now used the gene-editing tool CRISPR (whose developers won the Nobel Prize in Chemistry in 2020) to correct the mistake through autologous stem cell therapy.
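
Reusing the same abbreviated, illustrative codon-table idea from the sketch above, a few lines of Python show just how small the sickle cell “missense” really is:

```python
# The sickle cell "missense": a single letter change in one codon (GAG -> GTG)
# swaps glutamic acid for valine at position 6 of the hemoglobin chain.
# The two-entry codon table is an illustrative fragment of the full 64-codon table.
CODONS = {"GAG": "Glutamic acid", "GTG": "Valine"}

normal_codon, sickle_codon = "GAG", "GTG"

# Find which of the three letters changed.
changed = [i + 1 for i, (a, b) in enumerate(zip(normal_codon, sickle_codon)) if a != b]

print(f"Letter {changed[0]} of the codon changed: "
      f"{CODONS[normal_codon]} -> {CODONS[sickle_codon]} at amino acid position 6")
# Letter 2 of the codon changed: Glutamic acid -> Valine at amino acid position 6
```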

As Michigan State University physicist Stephen Hsu said, “The goal here is, you give me a change to a protein, and instead of predicting the protein shape, I tell you: Is this bad for the human that has it? Most of these flips, we just have no idea whether they cause sickness.”

Patrick Malone, a physician researcher at KdT Ventures, sees AI on the march. He says this is “an example of one of the most important recent methodological developments in AI. The concept is that the fine-tuned AI is able to leverage prior learning. The pre-training framework is especially useful in computational biology, where we are often limited by access to data at sufficient scale.”

AlphaMissense creators believe their predictions may:

“Illuminate the molecular effects of variants on protein function.”

“Contribute to the identification of pathogenic missense mutations and previously unknown disease-causing genes.”

“Increase the diagnostic yield of rare genetic diseases.”

And of course, this cautionary note: The growing capacity to define and create life carries with it the potential to alter life. Which is to say, what we create will eventually change who we are and how we behave toward each other. And when this inevitably occurs, who will bear the liability, and are lawyers prepared to argue the fallout of “missense” in the courtroom?

Brawn to Brain to Health: A Virtuous Cycle.

Posted on | February 1, 2024 | 3 Comments

Mike Magee

I’ve been thinking a lot lately about the future of work. My current obsession relates back to the accelerating forces of human isolation and shifts in the delivery of health care brought on by the Covid pandemic, and the subsequent explosion of health tech opportunists ready to bridge the geographic gap between health care supply and health care (especially mental health) demand.

For historians, work defined by geography and technology has always been a fertile and determinative area of study. This is as timely today as it was in our distant past. As one report put it, “Recent advances in artificial intelligence and robotics have generated a robust debate about the future of work. An analogous debate occurred in the late nineteenth century when mechanization first transformed manufacturing.”

As one 2021 study by the National Bureau of Economic Research (NBER) recently stated, “The story of nineteenth century development in the United States is one of dynamic tension between extensive growth as the country was settled by more people, bringing more land and resources into production, and the intensive growth from enhancing the productivity of specific locations.”

The researchers’ specific interest lay at the intersection of industrialization and urbanization, mutually reinforcing trends.

Consider these points:

Our first industrial revolution was “predominantly rural” with 83% of the 1800 labor force involved with agriculture, producing goods for personal, and at times local market consumption.

The few products that were exported far and wide at the time – cotton and tobacco – relied on slave labor to be profitable.

Manufacturing in 1800 was primarily home-based due to most families’ lack of resources to buy expensive goods, and the distances to be traversed in order to access these poorly supplied marketplaces.

The U.S. population was scarcely 5 million in 1800, occupying 860,000 square miles – roughly 6 humans per square mile. And that was before we added the Louisiana Purchase in 1803, which doubled our land holdings while expanding our population numbers by a third. The net effect was to further dilute human presence to roughly 4 individuals per square mile.

Concentrations of humans were few and far between in this vast new world. In 1800, only 33 communities had populations of 2,500 or more individuals, representing 6% of our total population at the time.

Transportation into and out of these centers relied as much as possible on water, including the Eastern seaboard and internal waterways. This was in recognition that roads were primitive and that shipping goods by horse and wagon was slow (a horse could generally travel 25 miles a day) and expensive (wagon shipment in 1816 added several days and cost 70 cents per ton-mile).

But by 1900, the U.S. labor force was only 40% agricultural. Four in 10 Americans now lived in cities with 2,500 or more inhabitants, and 25% of Americans lived in the growing nation’s 100 largest cities. 

This radical shift was a function of our Second Industrial Revolution, which had begun a century earlier in Britain, with a 50-year lag time in America. But once we got going in the post-Civil War period, we exploded. In fact, American labor productivity had lapped Britain’s twice over by 1900.

Our growth was fueled by transformative transportation technology and “inanimate source” (non-water powered) energy.

In the beginning of the 19th century, what manufacturing did occur was almost always situated next to sources of natural water flow. The rivers and streams drove water wheels and, later, turbine engines. But this dependency lessened with the invention of the steam engine. Coal- and wood-powered burners could then create steam (multiplying the power of water several times) to drive engines. The choices for siting population centers had now widened.

At least as important was the creation of a national rail network that had begun in 1840. This transformed market networks, increasing both supply and demand. The presence of rail transport decreased the cost of shipping by 80% seemingly overnight, and incentivized urbanization.

Within a short period of time, self-reliant home manufacturing couldn’t compete with urban “machine labor.” Those machines were now powered not by waterpower but by “inanimate power” (steam and eventually electricity). Mechanized factories were filled with newly arrived immigrants and freed slaves engaged in the “Great Migration” northward. As the factory workforce grew, so did specialization of tasks and occupation titles. The net effect was quicker production (7 times quicker than non-machine labor).

Even before the information revolution, the internet, telemedicine, and pandemic-driven nesting, all of these 20th-century trends had begun to flatten. The linkages between transportation, urbanization, and market supply were coming undone. Why?

According to the experts, “Over the twentieth century new forces emerged that decoupled manufacturing and cities. The spread of automobiles, trucks, and good roads, the adoption of electrical power, and the mechanization of farming are thought to have encouraged the decentralization of manufacturing activity.”

What can we learn from all this? 

First, innovation and technology stoke change, and nothing is permanent.

Second, markets shape human preferences, and vice versa.

Third, in the end, when it comes to the human species, self-interest and health win out.

Or as Dora Costa PhD, Professor of Economics at UCLA, puts it:

“Health improvements were not a precondition for modern economic growth. The gains to health are largest when the economy has moved from ‘brawn’ to ‘brains’ because this is when the wage returns to education are high, leading the healthy to obtain more education. More education may improve use of health knowledge, producing a virtuous cycle.”

Opposing “Obamacare” Is Political Suicide For Downstream Republicans.

Posted on | January 30, 2024 | 1 Comment

Mike Magee

Despite Trump’s recent renewed pledge to once again take on Obamacare, most Republican leaders understand that opposing the increasingly popular program is political suicide. Experts have repeatedly advised the opposite, as we move slowly and incrementally toward “universalism in conjunction with simple source funding.”

A brief summary of past history helps refresh our collective memories on the road we’ve traveled.

Medicare (a federal health insurance program covering all citizens over age 65) and Medicaid (a state and federal health insurance for poor and disabled citizens) date back to the original LBJ legislation in 1965. President Johnson had intended that the programs would be standard fare in all states. But to achieve passage, Democrats agreed to make the federal/state Medicaid program voluntary, and allow states to determine the details, such as income eligibility limits and work requirements. Medicare became the law of the land immediately, and Medicaid in some form was active in all states by 1982.

In 2010, as part of the Affordable Care Act (popularly termed “Obamacare”), President Obama included an expansion of Medicaid with conditions – that all citizens up to 138% of the federal poverty limit be eligible. In return, the added cost to the states would be paid for with federal subsidies of 100% until 2020 when they would become 90%. Under the original proposed legislation, states with diminished benefits and restrictions would be forced to comply with the new rules or lose their existing federal funding under Medicaid.

In 2012, 26 attorneys general from Republican-led states filed a lawsuit to challenge Obamacare on two counts in an attempt to collapse the entire program. First, the “individual mandate” (an annual charge or tax on those who did not have health insurance) was targeted. Second, they attacked the constitutionality of the Medicaid extension.

The Affordable Care Act’s (ACA) mandate was an original component of Governor Mitt Romney’s Massachusetts law, designed to ensure that all citizens and organizations would participate and contribute to even risk-sharing. In the federal bill the mandate was the “stick” to counterbalance the various “carrots” of premium subsidies.

The petition against the ACA mandate became part of the landmark case National Federation of Independent Business v. Sebelius, 567 U.S. 519 (2012). The argument for repeal rested on the fact that the administration had justified the mandate as constitutional under Article I, Section 8 – the Commerce Clause and the Necessary and Proper Clause.

On June 28, 2012, Chief Justice Roberts disappointed fellow Republicans with a complex decision that split the difference.

As he stated in his closing: “The Affordable Care Act is constitutional in part and unconstitutional in part. The individual mandate cannot be upheld as an exercise of Congress’s power under the Commerce Clause. That Clause authorizes Congress to regulate interstate commerce, not to order individuals to engage in it. In this case, however, it is reasonable to construe what Congress has done as increasing taxes on those who have a certain amount of income, but choose to go without health insurance.  Such legislation is within Congress’s power to tax.”

Roberts did, however, support Republicans on their second issue. The Affordable Care Act had mandated that all states expand eligibility to Medicaid in return for a federal subsidy of 100% of the added expense until 2020, or risk loss of all existing federal Medicaid funding.

The Court’s ruling : “As for the Medicaid expansion, that portion of the Affordable Care Act violates the Constitution by threatening existing Medicaid funding. Congress has no authority to order the States to regulate according to its instructions. Congress may offer the States grants and require the States to comply with accompanying conditions, but the States must have a genuine choice whether to accept the offer.”

As a result, states would have to voluntarily opt into Medicaid expansion under the ACA. Over the next decade, 40 states signed up (the last being North Carolina in 2023), while 10 have not. The impact on uninsured numbers was almost immediate. Participating states saw their uninsured rates drop and preventive health measures rise. States that chose not to participate, and stayed with financial eligibility that averaged 40% of poverty levels (rather than 138%), lagged far behind on all measures.

For states opposing Medicaid expansion with federal subsidies, the decision proved costly. They were shown to spend more out of state coffers to support uncompensated ER care for their uninsured than the 10% contribution required after 2020. Yet they stubbornly held out in the hope of denying Democrats a further victory, celebrating states’ rights at a huge financial and wellness cost (higher mortality rates) to their citizens. The cost in dollars alone is increasingly difficult to justify. For example, Florida’s stubborn resistance left 1 million Floridians out in the cold, and cost the state $5.6 billion in immediate federal aid and an additional $4.4 billion annually.

Over the past two decades, the number of citizens covered by Medicaid has grown substantially, from 40 million to over 90 million. The pandemic reinforced the critical role that Medicaid plays in assuring Americans’ health coverage. In March 2020, the Families First Coronavirus Response Act was signed into law; it bumped up federal Medicaid subsidies by 6.2% in return for a mandated freeze of Medicaid enrollee status in all states.

Medicaid numbers increased from 71 to 94 million during this period. That provision sunsetted on March 31, 2023, allowing states to disenroll citizens deemed ineligible. This “unwinding” is viewed as potentially destabilizing, and states from Missouri to Texas, and Tennessee to Idaho, have been accused of shady practices in trimming their Medicaid rolls.

The ACA, of course, was not restricted to Medicaid enrollees. The Act also created subsidized “insurance marketplaces” nationwide, serviced by federally funded “navigators,” made children eligible for their parents’ insurance up to age 26, and prohibited insurers from denying coverage based on preexisting conditions. The popularity of these provisions contributed to Trump’s failure in 2017 to eliminate the ACA, sealed by Senator McCain’s dramatic “thumbs down.” Had Trump been successful, it is estimated that the number of uninsured would have increased by 32 million.

The pandemic and inflationary pressures allowed President Biden to expand the effectiveness of these markets. Increased federal subsidies have lowered costs for eligible consumers and expanded access, with eligibility now extended up to four times the federal poverty limit, or roughly $120,000 a year for a family of four.

During the Trump years, from 2016 to 2020, enrollment in the health exchanges dropped 10% to 11.4 million as marketing and the use of navigators were all but eliminated. But in the following four years under Biden, enrollment skyrocketed by 87% to 21.3 million participants. That included 3 1/2 million Texans and over 4 million Floridians.

Thirty-two states allowed the national HealthCare.gov to be their agent in the transactions, while 18 states chose to field their own websites.

If Obamacare was successful and popular, Bidencare is even more so. The original plan was plagued by an “ACA gap.” Medicaid for the poor and disabled was hampered by deliberate state restrictions on access and work requirements, and the Health Exchanges were underfunded, with insufficient subsidies to benefit middle-income Americans. As a result, the poor were often underserved, and those with marginal incomes still made too much to qualify for ACA subsidies. By expanding Medicaid access and funding, while increasing health exchange subsidies for those up to 150% of poverty, President Biden has all but eliminated that gap, and enrollment has exploded.

So as Trump adds “eliminating Obamacare” to his campaign wish list, downstream Republicans had best hope he’s just pulling their legs and will soon go silent on the issue.

“Silence Is So Accurate” In A Hyperkinetic AI World.

Posted on | January 23, 2024 | 2 Comments

Mike Magee

“I mean, people keep saying in these troubled moments, in these troubled moments. It seems like we’re always in a troubled moment, perhaps this one even more so than usual…But the artwork is a great conduit to feeling that renewed hope for what the human being is capable of.” 

Those were the words of Christopher Rothko, son of the world-renowned artist Mark Rothko. They were spoken last month at the National Gallery of Art (NGA), on Constitution Avenue in Washington, D.C., before a live audience of 1,000, and countless others via live-streamed video.

Chris was joined that day by his sister, Kate Rothko Prizel, gallerist Arne Glimcher, and NGA curator Adam Greenhalgh, who moderated the hour-long panel discussion celebrating the opening of the exhibition, “Mark Rothko: Paintings on Paper.” My viewership owed to the fact that I believe in the transcendent power of art, and like many of you, am searching for answers, for solutions, during “these troubled moments.”

Mark Rothko’s official bio has, on the surface, a certain currency today – (Russia, Zionist, immigrant, Ivy League, NYC elite, educator of children). The first paragraph reads, “Mark Rothko was born Marcus Rothkowitz in Dvinsk, Russia, on September 25, 1903. His parents were Jacob and Anna Goldin Rothkowitz, and Rothko was raised in a well-educated family with Zionist leanings. At the age of ten, Rothko and his mother and sister immigrated to America to join his father and brothers… From 1921 to 1923 Rothko attended Yale University on a full scholarship and then moved to New York City. In 1924 he enrolled in the Art Students League…In 1929 Rothko began teaching children at the Center Academy of the Brooklyn Jewish Center, a position he retained for more than twenty years.”

NGA correctly labels him a “world-renowned painter” who most admire for his “monumental soft-edged rectangular field” paintings that radiate with color. But experts like Michael Andor Brodeur, classical music critic at the Washington Post, emphasize his deep connection to Mozart, and Rothko’s much-quoted statement, “I became a painter because I wanted to raise painting to the level of poignancy of music and poetry.”

Most agree that Mark Rothko was deeply contemplative. Or in Brodeur’s words, created “diffusely defined panels of abutting colors — their most subdued hues ignited into a strange glow, their uncanny depth achieved through layers of paint and pigment, their silence overtaking every room they occupied… though they’re heavy with silence, it’s undeniable that his paintings also contain music.”

Rothko put it more simply: “Silence is so accurate.” Perhaps his most famous quote.

Viewers of Rothko are not passive. Many accounts document visitors “bursting into tears.”  NGA curator and Rothko expert, Adam Greenhalgh, explained the painter’s intent to “smile through tears” this way: “Rothko hoped his paintings conveyed basic human emotions — tragedy, ecstasy, doom — and that comes from tragedy…”

During his early years, the years Rothko was teaching art to children in New York City, he was working on a manuscript focused on the role of the artist in society. In a book about his father, written with his sister, Kate, Chris recounts that his father “sees the artist as someone who is almost like a soothsayer, someone whose responsibility is to some degree our conscience, but certainly to awaken us to all the ideas.” But his father never finished the book, according to his son, because “he realizes that he can actually paint that better than he’s writing it.”

Once his painting was “written,” Rothko worked hard to ensure that the viewer would be able to “read it” correctly. He insisted his monumental paintings be hung only 30 centimeters above the floor, so that you would experience a face-to-face immersion. Galleries were repainted in a light gray with brownish tones which the artist believed best completed his creations. And spot lighting the works was prohibited, as were stanchions that separated the visitors from the works. The works were grouped together in large space galleries, with benches if possible, to encourage long contemplation.

One could easily argue that our hyper-kinetic world, so distracted and anxious and fearful, a world where common ground escapes us, and health – in mind, body, and spirit – is elusive, needs other ways to reach out and touch, other ways to communicate.

On his father’s purpose, son Chris reflects, “he sets up the artist as sort of historically someone who’s not understood or discounted, but in fact, might have some things to say to us in a language that maybe isn’t the one that we speak all the time, but maybe goes a little deeper.”

If you happen to be in Washington in the next two months, and you have a few hours to spare, may I suggest a visit to the National Gallery of Art. The Rothko exhibit runs through March 31st. If not, buy or borrow a copy of Chris and Kate’s book about their father (Mark Rothko), and set aside an hour to listen to “Mark Rothko: Insights from Arne Glimcher and the Rothko Family,” a remarkable conversation that highlights America’s complexity, strength, majesty, beauty, and promise as we carefully negotiate the year 2024.

