HealthCommentary

Exploring Human Potential

“Generative Pre-Trained Transformers” (GPT): An Historical Perspective.

Posted on | February 13, 2024 | 2 Comments

Mike Magee

Over the past year, the general popularization of AI or Artificial Intelligence has captured the world’s imagination. Of course, academicians often emphasize historical context.  But entrepreneurs tend to agree with Thomas Jefferson who said, “I like dreams of the future better than the history of the past.”

This particular dream however is all about language, its standing and significance in human society. Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”

Well before ChatGPT became a household phrase, there was LDT, or the laryngeal descent theory. It held that humans' unique capacity for speech was the result of a voice box, or larynx, that sits lower in the throat than in other primates. This permitted the "throat shape, and motor control" to produce the vowels that are the cornerstone of human speech. Speech – and therefore the arrival of language – was pegged to anatomical evolutionary changes dated at between 200,000 and 300,000 years ago.

That theory, as it turns out, had very little scientific evidence behind it. In 2019, a landmark study pushed the date of primate vocalization back to at least 3 to 5 million years ago. As the scientists summarized it in three points: "First, even among primates, laryngeal descent is not uniquely human. Second, laryngeal descent is not required to produce contrasting formant patterns in vocalizations. Third, living nonhuman primates produce vocalizations with contrasting formant patterns."

Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology. If you want to study speech science, you had better have a working knowledge of "phonetics, anatomy, acoustics and human development," say the experts. You could add to this "syntax, lexicon, gesture, phonological representations, syllabic organization, speech perception, and neuromuscular control."

Professor Paul Pettitt, who makes a living at the University of Oxford interpreting ancient rock paintings in Africa and beyond, sees the birth of civilization in multimodal language terms. He says, "There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa." Google CEO Sundar Pichai maintains a similarly expansive view when it comes to language. In his December 6, 2023, introduction of the company's groundbreaking LLM (large language model), Gemini (a competitor of ChatGPT), he described the new product as "our largest and most capable AI model with natural image, audio and video understanding and mathematical reasoning."

Digital Cognitive Strategist, Mark Minevich, echoed Google’s view that the torch of human language had now gone well beyond text alone and had been passed to machines. His review: “Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on, yet goes far beyond, predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically.”

GPT what???

O.K. Let's take a step back and give us all a chance to catch up.

What we call AI, or "artificial intelligence," is a 70-year-old concept that used to be called "deep learning." It was the brain construct of University of Chicago research scientists Warren McCulloch and Walter Pitts, who developed the concept of "neural nets" in 1944, modeling the theoretical machine learner on the human brain: multiple overlapping transit fibers, joined at synaptic nodes, which with adequate stimulus allow gathered information to pass on to the next fiber down the line.

On the strength of that concept, the two moved to MIT in 1952 and launched the Cognitive Science Department, uniting computer scientists and neuroscientists. In the meantime, Frank Rosenblatt, a Cornell psychologist, invented the "first trainable neural network" in 1957, which he futuristically termed the "Perceptron." It included a data input layer; a sandwiched middle layer that could adjust information packets with "weights" and "firing thresholds"; and a third, output layer that allowed data meeting the threshold criteria to pass down the line.
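For readers who like to see the moving parts, here is a minimal sketch of Rosenblatt's idea in modern Python. The weights, threshold, and learning rate are illustrative stand-ins, not a reconstruction of the original 1957 hardware:

```python
# A minimal sketch of the Perceptron idea (illustrative values, not the
# original hardware): inputs pass through adjustable "weights" and a
# "firing threshold" on their way to the output layer.

def perceptron_output(inputs, weights, threshold):
    """Weighted sum of the inputs; the node 'fires' only past the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def train_step(inputs, weights, threshold, target, lr=0.1):
    """Nudge each weight against the observed error -- the 'trainable' part."""
    error = target - perceptron_output(inputs, weights, threshold)
    return [w + lr * error * x for w, x in zip(weights, inputs)]

# Teach the unit logical OR from four labeled examples.
weights = [0.0, 0.0]
for _ in range(10):
    for inputs, target in [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]:
        weights = train_step(inputs, weights, threshold=0.5, target=target)

print(perceptron_output([1, 0], weights, 0.5))  # 1 -- the unit has learned OR
```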

Back at MIT, the Cognitive Science Department was hijacked in 1969 by mathematicians Marvin Minsky and Seymour Papert and became the MIT Artificial Intelligence Laboratory. They summarily trashed Rosenblatt's Perceptron machine, believing it underpowered and inefficient at even the most basic computations. By 1980, the department was ready to deliver a "never mind," as computing power grew and algorithms for encoding thresholds and weights at neural nodes became efficient and practical.

The computing leap, experts now agree, came "courtesy of the computer-game industry," whose "graphics processing unit" (GPU), housing thousands of processing cores on a single chip, was effectively the neural net that McCulloch and Pitts had envisioned. By 1977, Atari had developed game cartridges and microprocessor-based hardware with a successful television interface.

With the launch of the Internet and the commercial explosion of desktop computing, language – the fuel for human interactions worldwide – grew exponentially in importance. More specifically, the greatest demand was for language that could link humans to machines in a natural way.

With the explosive growth of text data, the focus initially was on Natural Language Processing (NLP), "an interdisciplinary subfield of computer science and linguistics primarily concerned with giving computers the ability to support and manipulate human language." Early training software used annotated or referenced texts to answer specific questions or perform specific tasks precisely. But such systems were of limited use and accuracy outside their pre-determined training, and their inefficiency undermined adoption.

But computing power had now advanced far beyond what Warren McCulloch and Walter Pitts could have possibly imagined in 1944, while the concept of "neural nets" couldn't be more relevant. IBM describes the modern-day version this way:

“Neural networks …are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another… Artificial neural networks are comprised of node layers, containing an input layer, one or more hidden layers, and an output layer…Once an input layer is determined, weights are assigned. These weights help determine the importance of any given variable, with larger ones contributing more significantly to the output compared to other inputs. All inputs are then multiplied by their respective weights and then summed. Afterward, the output is passed through an activation function, which determines the output. If that output exceeds a given threshold, it “fires” (or activates) the node, passing data to the next layer in the network… it’s worth noting that the “deep” in deep learning is just referring to the depth of layers in a neural network. A neural network that consists of more than three layers—which would be inclusive of the inputs and the output—can be considered a deep learning algorithm. A neural network that only has two or three layers is just a basic neural network.”

The bottom line is that the automated system responds to an internal logic. The computer's "next choice" is determined by how well it fits with the prior choices. And it doesn't matter where the words or "coins" come from. Feed it data, and it will "train" itself; and by following the rules, or algorithms, embedded in the middle decision layers or screens, it will "transform" the acquired knowledge into "generated" language that both human and machine understand.
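IBM's description translates almost line for line into code. Below is a toy forward pass in those terms – weights, summation, activation, stacked layers – with made-up numbers chosen purely for illustration:

```python
import math

def sigmoid(z):
    """Activation function: squashes the weighted sum into the 0-1 range."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """One node layer: multiply inputs by weights, sum, then activate."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weight_rows, biases)]

x = [0.8, 0.2]                                             # input layer
hidden = layer(x, [[0.5, -0.3], [0.1, 0.9]], [0.0, 0.0])   # hidden layer
output = layer(hidden, [[1.2, -0.7]], [0.1])               # output layer
print(output)
# "Deep" learning simply means stacking more than one hidden layer
# of exactly this kind between the input and the output.
```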

In late 2015, a group of tech entrepreneurs including Elon Musk and Reid Hoffman, believing AI could go astray if restricted or weaponized, formed a non-profit called OpenAI. Three years later they released their first deep learning product in the GPT line. This solution was born out of the marriage of Natural Language Processing and deep learning neural networks, with a stated goal of "enabling humans to interact with machines in a more natural way."

The GPT stood for "Generative Pre-trained Transformer." Built into the software was the ability to "consider the context of the entire sentence when generating the next word" – a tactic known as "auto-regressive." As a "self-supervised learning model," GPT is able to learn by itself from ingesting huge amounts of anonymous text; transform it by passing it through a variety of intermediary weighted screens that jury the content; and allow passage (and survival) of data that is validated. The resultant output? Language that convincingly mimics human text.
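Here is a toy sketch of that auto-regressive loop in Python. The scoring function is a hypothetical stand-in – a real GPT scores every candidate word with billions of learned weights – but the generate-append-repeat structure is the same:

```python
import random

def score_next(context, candidate):
    """Hypothetical scorer; a trained GPT computes this from learned weights."""
    bigrams = {("the", "patient"): 3.0, ("patient", "is"): 2.5,
               ("is", "stable"): 2.0}
    return bigrams.get((context[-1], candidate), 0.1)

def generate(prompt, vocab, steps=3):
    """Auto-regression: score candidates against the whole context so far,
    sample one, append it, and feed the longer context back in."""
    tokens = prompt.split()
    for _ in range(steps):
        weights = [score_next(tokens, word) for word in vocab]
        tokens.append(random.choices(vocab, weights=weights)[0])
    return " ".join(tokens)

print(generate("the", ["patient", "is", "stable"]))  # e.g. "the patient is stable"
```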

Leadership at Microsoft was impressed, and in 2019 ponied up $1 billion to participate jointly in development of the product and to serve as OpenAI's exclusive cloud provider.

The first GPT released by OpenAI was GPT-1, in 2018. It was trained on the enormous BooksCorpus dataset. Its design included an input and an output layer, with 12 successive transformer layers sandwiched in between. It was so effective in Natural Language Processing that minimal fine-tuning was required on the back end.

One year later, OpenAI released version two, GPT-2, ten times the size of its predecessor at 1.5 billion parameters, with the capacity to translate and summarize. GPT-3 followed in 2020. It had grown to 175 billion parameters, 100 times the size of GPT-2, and was trained by ingesting a corpus of roughly 500 billion tokens of text (including that of my own book CODE BLUE). It could now generate long passages on verbal demand, do basic math, write code, and perform (what the inventors describe as) "clever tasks." An intermediate GPT-3.5 absorbed Wikipedia entries, social media posts, and news releases.

On March 14, 2023, GPT-4 went big, with multimodal capabilities spanning text, speech, images, and physical interactions with the environment. It represented an exponential convergence of multiple technologies including databases, AI, cloud computing, 5G networks, personal edge computing, and more.

The New York Times headline announced it as "Exciting and Scary." Their technology columnist wrote, "What we see emerging are machines that know how to reason, are adept at all human languages, and are able to perceive and interact with the physical environment." He was not alone in his concerns. The Atlantic, at about the same time, ran an editorial titled "AI Is About to Make Social Media (Much) More Toxic."

Leonid Zhukov, PhD, director of the Boston Consulting Group's (BCG) Global AI Institute, believes offerings like ChatGPT-4 and Gemini "have the potential to become the brains of autonomous agents—which don't just sense but also act on their environment—in the next 3 to 5 years. This could pave the way for fully automated workflows."

Were he alive, Leonardo da Vinci would likely be unconcerned. Five hundred years ago, he wrote nonchalantly, "It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things."

2024 Word of the Year

Posted on | February 5, 2024 | 4 Comments

 

Mike Magee

Not surprisingly, Hcom’s nominee for “word of the year” involves AI, and specifically “the language of human biology.”

As Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute and coiner of the term "friendly AI," stated in Forbes:

“Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement – wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.” 

Perhaps the simplest way to begin is to say that "missense" is a form of misspeak, or expressing oneself in words "incorrectly or imperfectly." But in the case of "missense," the language is not made of words, where (for example) the meaning of a sentence would be disrupted by misspelling or choosing the wrong word.

With "missense," we're talking about a different language – the language of DNA and proteins. Specifically, the focus is on how the four base units, or nucleotides, that provide the skeleton of a strand of DNA communicate instructions for each of the 20 different amino acids in the form of three-letter codes, or "codons."

In this protein language, there are four nucleotides. Each nucleotide (adenine, guanine, cytosine, thymine) is a 3-part molecule that includes a nitrogenous base, a 5-carbon sugar, and a phosphate group. The four nucleotides' unique chemical structures are designed to create two "base-pairs": adenine links to thymine through two hydrogen bonds, and cytosine links to guanine through three hydrogen bonds.

A-T and C-G bonds effectively "reach across" two strands of DNA to connect them in the familiar "double-helix" structure. The strands gain length as the sugar and phosphate molecules at the top and bottom of each nucleotide join to their neighbors above and below.

The A's, T's, C's and G's are the starting points of a code. A string of three – for example, A-T-G – is called a "codon," which in this case stands for one of the 20 amino acids common to all life forms, methionine. There are 64 different codons: 61 direct the chain addition of one of the 20 amino acids (some are duplicates), and the remaining 3 serve as "stop codons" to end a protein chain.

Messenger RNA (mRNA) carries a complementary copy of the coded nucleotide base string from the cell nucleus to ribosomes out in the cytoplasm of the cell. Codons then call up each amino acid, which, when linked together, form the protein. The protein's structure is defined by the specific amino acids included and their order of appearance. Protein chains fold spontaneously, and in the process form a 3-dimensional structure that determines their biologic function.
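The codon code is simple enough to sketch in a few lines of Python. The table below is a tiny fragment of the full 64-entry table (written in the DNA alphabet of the coding strand), just enough to show the three-letters-per-amino-acid walk and a stop codon at work:

```python
# A fragment of the 64-entry codon table (DNA alphabet, coding strand).
CODON_TABLE = {
    "ATG": "Met", "GAG": "Glu", "GTG": "Val", "GGT": "Gly",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",  # the 3 stop codons
}

def translate(dna):
    """Read the strand three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" = not in this fragment
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

print(translate("ATGGAGGGTTAA"))  # Met-Glu-Gly, ended by the TAA stop codon
```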

A mistake in a single letter of a codon can result in a mistaken message, or "missense." In 2018, Google's DeepMind released AlphaFold, an artificial intelligence system able to predict a protein's 3-dimensional structure from its amino acid sequence, with the promise of accelerating drug discovery. Five years later, the company released AlphaMissense, which mines AlphaFold's databases to learn this new "protein language," much as the large language model (LLM) product ChatGPT learned ours. The ultimate goal: to predict where "disease-causing mutations are likely to occur."

A work in progress, AlphaMissense has already created a catalogue of possible human missense mutations, declaring 57% likely to have no harmful effect and 32% possibly linked to (still to be determined) human pathology. The company has open-sourced much of its database, and hopes it will accelerate "analyses of the effects of DNA mutations and … the research into rare diseases."

The numbers are not small. Believe it or not, AI says the 46-chromosome human genome theoretically harbors 71 million possible missense events waiting to happen. Up to now, only 4 million have been identified. For humans today, the average genome includes only about 9,000 of these mistakes, most of which have no bearing on life or limb.

But occasionally they do. Take, for example, sickle cell anemia. The painful and life-limiting condition is the result of a single codon mistake (GTG instead of GAG) on the nucleotide chain coded to create the protein hemoglobin. That tiny error causes the 6th amino acid in the evolving hemoglobin chain, glutamic acid, to be substituted with the amino acid valine. Knowing this, investigators have now used the gene-editing tool CRISPR (the subject of the 2020 Nobel Prize in Chemistry) to correct the mistake through autologous stem cell therapy.
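In the terms of the codon sketch above, the sickle cell story reduces to a one-letter flip. This standalone fragment (with illustrative names only) shows just how small a "missense" can be:

```python
# The sickle cell missense in miniature: one letter flips in the codon,
# and the encoded amino acid changes with it.
CODON = {"GAG": "glutamic acid", "GTG": "valine"}
normal, mutant = "GAG", "GTG"
flipped = [i + 1 for i in range(3) if normal[i] != mutant[i]]
print(f"letter {flipped[0]} flips: {CODON[normal]} -> {CODON[mutant]}")
# letter 2 flips: glutamic acid -> valine
```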

As Michigan State University physicist Stephen Hsu said, “The goal here is, you give me a change to a protein, and instead of predicting the protein shape, I tell you: Is this bad for the human that has it? Most of these flips, we just have no idea whether they cause sickness.”

Patrick Malone, a physician researcher at KdT Ventures, sees AI on the march. He says this is "an example of one of the most important recent methodological developments in AI. The concept is that the fine-tuned AI is able to leverage prior learning. The pre-training framework is especially useful in computational biology, where we are often limited by access to data at sufficient scale."

AlphaMissense creators believe their predictions may:

“Illuminate the molecular effects of variants on protein function.”

“Contribute to the identification of pathogenic missense mutations and previously unknown disease-causing genes.”

“Increase the diagnostic yield of rare genetic diseases.”

And of course, this cautionary note: the growing capacity to define and create life carries with it the potential to alter life. Which is to say, what we create will eventually change who we are and how we behave toward each other. And when this inevitably occurs, who will bear the liability, and are lawyers prepared to argue the fallout of "missense" in the courtroom?

Brawn to Brain to Health: A Virtuous Cycle.

Posted on | February 1, 2024 | 3 Comments

Mike Magee

I’ve been thinking a lot lately about the future of work. My current obsession relates back to the accelerating forces of human isolation and shifts in the delivery of health care brought on by the Covid pandemic, and the subsequent explosion of health tech opportunists ready to bridge the geographic gap between health care supply and health care (especially mental health) demand.

For historians, work defined by geography and technology has always been a fertile and determinative area of study. This is as timely today as it was in our distant past. As one report put it, “Recent advances in artificial intelligence and robotics have generated a robust debate about the future of work. An analogous debate occurred in the late nineteenth century when mechanization first transformed manufacturing.”

As one 2021 study by the National Bureau of Economic Research (NBER) recently stated, “The story of nineteenth century development in the United States is one of dynamic tension between extensive growth as the country was settled by more people, bringing more land and resources into production, and the intensive growth from enhancing the productivity of specific locations.”

The researchers specific interest lay at the intersection of industrialization and urbanization, mutually reinforcing trends.

Consider these points:

Our first industrial revolution was "predominantly rural," with 83% of the 1800 labor force involved in agriculture, producing goods for personal and, at times, local market consumption.

The few products that were exported far and wide at the time – cotton and tobacco – relied on slave labor to be profitable.

Manufacturing in 1800 was primarily home-based, due to most families' lack of resources to buy expensive goods and the distances that had to be traversed to reach poorly supplied marketplaces.

The U.S. population was scarcely 5 million in 1800, occupying 860,000 square miles – roughly 6 humans per square mile. And that was before we added the Louisiana Purchase in 1803, which doubled our land holdings while expanding our population by a third. The net effect was to further dilute human presence, to roughly 4 individuals per square mile.

Concentrations of humans were few and far between in this vast new world. In 1800, only 33 communities had populations of 2,500 or more individuals, representing 6% of our total population at the time.

Transportation into and out of these centers relied on water routes – the Eastern seaboard and internal waterways – as much as possible, in recognition that roads were primitive and shipping goods by horse and wagon was slow (a horse could generally travel 25 miles a day) and expensive (wagon shipment in 1816 added several days and cost 70 cents per ton-mile).

But by 1900, the U.S. labor force was only 40% agricultural. Four in 10 Americans now lived in cities with 2,500 or more inhabitants, and 25% of Americans lived in the growing nation’s 100 largest cities. 

This radical shift was a function of our Second Industrial Revolution, which had begun a century earlier in Britain, with a 50-year lag time in America. But once we got going in the post-Civil War period, we exploded. In fact, American labor productivity had lapped Britain's twice over by 1900.

Our growth was fueled by transformative transportation technology and “inanimate source” (non-water powered) energy.

At the beginning of the 19th century, what manufacturing did occur was almost always situated next to sources of natural water flow. The rivers and streams drove water wheels and, later, turbine engines. But this dependency lessened with the invention of the steam engine. Coal- and wood-powered burners could then create steam (multiplying the power of water several times) to drive engines. The choices for siting population centers now widened.

At least as important was the creation of a national rail network that had begun in 1840. This transformed market networks, increasing both supply and demand. The presence of rail transport decreased the cost of shipping by 80% seemingly overnight, and incentivized urbanization.

Within a short period of time, self-reliant home manufacturing couldn't compete with urban "machine labor." Those machines were now powered not by waterpower but by "inanimate power" (steam and eventually electricity). Mechanized factories were filled with newly arrived immigrants and freed slaves engaged in the "Great Migration" northward. As the factory workforce grew, so did specialization of tasks and occupational titles. The net effect was quicker production (7 times quicker than non-machine labor).

Even before the information revolution, the internet, telemedicine, and pandemic-driven nesting, all of these 20th-century trends had begun to flatten. Transportation, urbanization, and market supply were being delinked. Why?

According to the experts, “Over the twentieth century new forces emerged that decoupled manufacturing and cities. The spread of automobiles, trucks, and good roads, the adoption of electrical power, and the mechanization of farming are thought to have encouraged the decentralization of manufacturing activity.”

What can we learn from all this? 

First, innovation and technology stoke change, and nothing is permanent.

Second, markets shape human preferences, and vice versa.

Third, in the end, when it comes to the human species, self-interest and health win out.

Or as Dora Costa, PhD, Professor of Economics at UCLA, puts it:

“Health improvements were not a precondition for modern economic growth. The gains to health are largest when the economy has moved from ‘brawn’ to ‘brains’ because this is when the wage returns to education are high, leading the healthy to obtain more education. More education may improve use of health knowledge, producing a virtuous cycle.”

Opposing “Obamacare” Is Political Suicide For Downstream Republicans.

Posted on | January 30, 2024 | 1 Comment

Mike Magee

Despite Trump's recent renewed pledge to once again take on Obamacare, most Republican leaders understand that opposing the increasingly popular program is political suicide. Experts have repeatedly advised the opposite, as we move slowly and incrementally toward "universalism in conjunction with simple source funding."

A brief summary of the history helps refresh our collective memories on the road we've traveled.

Medicare (a federal health insurance program covering all citizens over age 65) and Medicaid (a state and federal health insurance for poor and disabled citizens) date back to the original LBJ legislation in 1965. President Johnson had intended that the programs would be standard fare in all states. But to achieve passage, Democrats agreed to make the federal/state Medicaid program voluntary, and allow states to determine the details, such as income eligibility limits and work requirements. Medicare became the law of the land immediately, and Medicaid in some form was active in all states by 1982.

In 2010, as part of the Affordable Care Act (popularly termed “Obamacare”), President Obama included an expansion of Medicaid with conditions – that all citizens up to 138% of the federal poverty limit be eligible. In return, the added cost to the states would be paid for with federal subsidies of 100% until 2020 when they would become 90%. Under the original proposed legislation, states with diminished benefits and restrictions would be forced to comply with the new rules or lose their existing federal funding under Medicaid.

In 2012, 26 attorneys general from Republican-led states filed a lawsuit challenging Obamacare on two counts, in an attempt to collapse the entire program. First, they targeted the "individual mandate" (an annual charge, or tax, on those who did not have health insurance). Second, they attacked the constitutionality of the Medicaid expansion.

The Affordable Care Act's (ACA) mandate was an original component of Governor Mitt Romney's Massachusetts law, designed to ensure that all citizens and organizations would participate and contribute to even risk-sharing. In the federal bill, the mandate was the "stick" to counterbalance the various "carrots" of premium subsidies.

The petition against the ACA mandate became part of the landmark case National Federation of Independent Business v. Sebelius, 567 U.S. 519 (2012). The argument for repeal of the mandate rested on the administration's having justified it as constitutional under Article I, Section 8 – the Commerce Clause and the Necessary and Proper Clause.

On June 28, 2012, Chief Justice Roberts disappointed fellow Republicans with a complex decision that split the difference.

As he stated in his closing: “The Affordable Care Act is constitutional in part and unconstitutional in part. The individual mandate cannot be upheld as an exercise of Congress’s power under the Commerce Clause. That Clause authorizes Congress to regulate interstate commerce, not to order individuals to engage in it. In this case, however, it is reasonable to construe what Congress has done as increasing taxes on those who have a certain amount of income, but choose to go without health insurance.  Such legislation is within Congress’s power to tax.”

Roberts did, however, support Republicans on their second issue. The Affordable Care Act had mandated that all states expand Medicaid eligibility in return for a federal subsidy of 100% of the added expense until 2020, or risk the loss of all existing federal Medicaid funding.

The Court's ruling: "As for the Medicaid expansion, that portion of the Affordable Care Act violates the Constitution by threatening existing Medicaid funding. Congress has no authority to order the States to regulate according to its instructions. Congress may offer the States grants and require the States to comply with accompanying conditions, but the States must have a genuine choice whether to accept the offer."

As a result, states would have to opt into Medicaid expansion under the ACA voluntarily. Over the next decade, 40 states signed up (the last being North Carolina in 2023), while 10 have not. The impact on uninsured numbers was almost immediate. Participating states saw their uninsured rates drop and preventive health measures rise. States that chose not to participate, and stayed with financial eligibility averaging 40% of the poverty level (rather than 138%), lagged far behind on all measures.

For states opposing Medicaid expansion with federal subsidies, the decision proved costly. They were shown to spend more out of state coffers to support uncompensated ER care for their uninsured than the 10% contribution required after 2020. Yet they stubbornly held out in the hope of denying Democrats further victory, celebrating states' rights at a huge financial and wellness cost (higher mortality rates) to their citizens. The cost in dollars alone is increasingly difficult to justify. For example, Florida's stubborn resistance left 1 million Floridians out in the cold, and cost the state $5.6 billion in immediate federal aid plus an additional $4.4 billion annually.

Over the past two decades, the number of citizens covered by Medicaid has grown substantially, from 40 million to over 90 million. The pandemic reinforced the critical role that Medicaid plays in assuring Americans' health coverage. In March 2020, the Families First Coronavirus Response Act was signed into law, bumping up federal Medicaid subsidies by 6.2% in return for a mandated freeze on Medicaid disenrollment in all states.

Medicaid numbers increased from 71 to 94 million during this period. That Act sunsetted on March 31, 2023, allowing states to disenroll citizens deemed ineligible. This "unwinding" is viewed as potentially destabilizing, and states from Missouri to Texas, and Tennessee to Idaho, have been accused of shady practices in trimming their Medicaid rolls.

The ACA, of course, was not restricted to Medicaid enrollees. The legislation also created subsidized "insurance marketplaces" nationwide, serviced by federally funded "navigators"; made children eligible for their parents' insurance up to age 26; and prohibited insurers from denying coverage based on preexisting conditions. The popularity of these provisions contributed to Trump's failure to eliminate the ACA in 2017, sealed by Senator McCain's dramatic "thumbs down." Had Trump been successful, it is estimated that the number of uninsured would have increased by 32 million.

The pandemic and inflationary pressures allowed President Biden to expand the effectiveness of these markets. Increased federal subsidies have lowered costs to eligible consumers and broadened access, with eligibility now extended up to four times the federal poverty limit, or roughly $120,000 a year for a family of four.

During the Trump years, from 2016 to 2020, enrollment in the health exchanges dropped 10% to 11.4 million as marketing and the use of navigators were all but eliminated. But in the following four years under Biden, enrollment skyrocketed by 87% to 21.3 million participants. That included 3.5 million Texans and over 4 million Floridians.

Thirty-two states allowed the national HealthCare.gov to be their agent in the transactions, while 18 states chose to field their own websites.

If Obamacare was successful and popular, Bidencare is even more so. The original plan was plagued by an "ACA gap." Medicaid for the poor and disabled was hampered by deliberate state restrictions on access and work requirements, and the health exchanges were underfunded, with subsidies insufficient to benefit middle-income Americans. As a result, the poor were often underserved, and those with marginal incomes still made too much to qualify for ACA subsidies. By expanding Medicaid access and funding, while increasing eligibility for health exchange subsidies at 150% of poverty, President Biden has all but eliminated that gap, and enrollment has exploded.

So as Trump adds "eliminating Obamacare" to his campaign wish list, downstream Republicans had best hope he's just pulling their legs, and will soon go silent on the issue.

“Silence Is So Accurate” In A Hyperkinetic AI World.

Posted on | January 23, 2024 | 2 Comments

Mike Magee

“I mean, people keep saying in these troubled moments, in these troubled moments. It seems like we’re always in a troubled moment, perhaps this one even more so than usual…But the artwork is a great conduit to feeling that renewed hope for what the human being is capable of.” 

Those were the words of Christopher Rothko, son of world renowned artist, Mark Rothko. They were spoken last month, at the National Gallery of Art (NGA), on Constitution Avenue in Washington, D.C. before a live audience of 1000, and countless others via live stream video.

Chris was joined that day by his sister, Kate Rothko Prizel, gallerist Arne Glimcher, and NGA curator, Adam Greenhalgh, who moderated the hour long panel discussion celebrating the opening of the exhibition, “Mark Rothko: Paintings on Paper.” My viewership owed to the fact that I believe in the transcendent power of art, and like many of you, am searching for answers, for solutions, during “these troubled moments.”

Mark Rothko’s official bio has, on the surface, a certain currency today – (Russia, Zionist, immigrant, Ivy League, NYC elite, educator of children). The first paragraph reads, “Mark Rothko was born Marcus Rothkowitz in Dvinsk, Russia, on September 25, 1903. His parents were Jacob and Anna Goldin Rothkowitz, and Rothko was raised in a well-educated family with Zionist leanings. At the age of ten, Rothko and his mother and sister immigrated to America to join his father and brothers… From 1921 to 1923 Rothko attended Yale University on a full scholarship and then moved to New York City. In 1924 he enrolled in the Art Students League…In 1929 Rothko began teaching children at the Center Academy of the Brooklyn Jewish Center, a position he retained for more than twenty years.”

NGA correctly labels him a "world-renowned painter," most admired for his "monumental soft-edged rectangular field" paintings that radiate with color. But experts like Michael Andor Brodeur, classical music critic at the Washington Post, emphasize his deep connection to Mozart, and Rothko's much-quoted statement, "I became a painter because I wanted to raise painting to the level of poignancy of music and poetry."

Most agree that Mark Rothko was deeply contemplative. Or in Brodeur’s words, created “diffusely defined panels of abutting colors — their most subdued hues ignited into a strange glow, their uncanny depth achieved through layers of paint and pigment, their silence overtaking every room they occupied… though they’re heavy with silence, it’s undeniable that his paintings also contain music.”

Rothko put it more simply, in perhaps his most famous quote: "Silence is so accurate."

Viewers of Rothko are not passive. Many accounts document visitors “bursting into tears.”  NGA curator and Rothko expert, Adam Greenhalgh, explained the painter’s intent to “smile through tears” this way: “Rothko hoped his paintings conveyed basic human emotions — tragedy, ecstasy, doom — and that comes from tragedy…”

During his early years, the years Rothko was teaching art to children in New York City, he was working on a manuscript focused on the role of the artist in society. In a book about his father, written with his sister Kate, Chris recounts that his father "sees the artist as someone who is almost like a soothsayer, someone whose responsibility is to some degree our conscience, but certainly to awaken us to all the ideas." But his father never finished the book, according to his son, because "he realizes that he can actually paint that better than he's writing it."

Once his painting was "written," Rothko worked hard to ensure that the viewer would be able to "read it" correctly. He insisted his monumental paintings be hung only 30 centimeters above the floor, so that you would experience a face-to-face immersion. Galleries were repainted in a light gray with brownish tones, which the artist believed best completed his creations. Spotlighting the works was prohibited, as were stanchions separating visitors from the works. The works were grouped together in large, open galleries, with benches where possible, to encourage long contemplation.

One could easily argue that our hyper-kinetic world, so distracted and anxious and fearful, a world where common ground escapes us, and health – in mind, body, and spirit – is elusive, needs other ways to reach out and touch, other ways to communicate.

On his father’s purpose, son Chris reflects, “he sets up the artist as sort of historically someone who’s not understood or discounted, but in fact, might have some things to say to us in a language that maybe isn’t the one that we speak all the time, but maybe goes a little deeper.”

If you happen to be in Washington in the next two months, and you have a few hours to burn, may I suggest a visit to the National Gallery of Art. The Rothko exhibit runs to March 31st. If not, buy or borrow a copy of Chris and Kate's book about their father (Mark Rothko), and set aside an hour to listen to "Mark Rothko: Insights from Arne Glimcher and the Rothko Family," a remarkable conversation that highlights America's complexity, strength, majesty, beauty and promise as we carefully negotiate the year 2024.

Don’t Panic. This Is Democracy.

Posted on | January 18, 2024 | 2 Comments

Mike Magee

Shock and dismay were once again in the air this week as Donald Trump did his best to get thrown out of the courtroom in New York City during his unrequested attendance at his second E. Jean Carroll defamation trial.

But as an engaged citizen, I thank the former President for crash testing our form of government. Despite driving a majority of Americans to despair, he has driven the crazies, and their minority followers, out into the open, where they can be examined and confronted. 

He has also meticulously probed the nooks and crannies of the checks and balances of our constitutional government for weaknesses to exploit. And his "What a' ya gonna du about it?" gangster style has helped foster a certain alertness among true democracy's defenders.

Join me for a moment in Court with Trump this week:

E. Jean Carroll (on the stand):   “I’m here because Donald Trump assaulted me, and when I wrote about it, he said it never happened. He lied, and it shattered my reputation.”

Donald Trump (slumped over), slams hands on desk, and loudly whispers to his attorney.

Shawn Crowley (Carroll’s attorney) to the Judge:  “Mr Trump has been sitting at the back table and has been loudly saying things throughout Ms Carroll’s testimony. It’s loud enough for us to hear it. So I imagine it’s loud enough for the jury to hear it.”

Judge Lewis Kaplan to Trump’s attorney: “I’m just going to ask Mr Trump to take special care to keep his voice down when conferring with counsel, so that the jury does not overhear.”

Trump keeps at it.

Attorney Crowley  approaches the bench: “The defendant has been making statements again [that] we can hear at counsel table. He said it is a ‘witch-hunt’, it really is a con-job.”

Judge Kaplan replies: “Mr Trump has the right to be present here. That right can be forfeited, and it can be forfeited if he is disruptive, which is what has been reported to me, and if he disregards court orders. Mr Trump, I hope I don’t have to consider excluding you from the trial … I understand you are probably very eager for me to do that.”

Donald Trump: “I would love it, I would love it.”

Judge Kaplan: “I know you would, you just can’t control yourself in this circumstance, apparently.”

Last word Donald: “You can’t either.”

Now let's just say it out loud. Trump's a jerk, and a pain in democracy's ass. As my mother frequently said to me, "You're testing my patience." But he knows what he's doing – playing to the court of public opinion, and making a few bucks along the way – while feeding his malignant and insatiable narcissism.

A century ago, there were only 15 democracies worldwide. There are now over 100, representing two-thirds of the global population. The ascendant nature of the basic model suggests progress, not perfection.

Indiana University history professor John J. Patrick, in a brilliant little book, Understanding Democracy, dipped into democracy’s messiness, writing that “Differences in opinions and interests are tolerated and even encouraged in the public and private lives of citizens…Democracy in our world implies both collective and personal liberty.”

“Democracies are anchored by Constitutions which define the responsibilities of the various counter-balancing branches of government, and jury a system of laws or rules that apply to all citizens. The Constitution defines the limits on the power of government. It is a tricky balance. The democratic government must be powerful enough to maintain law and order. Yet it must be sufficiently restrained to avoid oppressing individual liberty.”

But maintaining an authentic democracy means keeping it real. Federalist #51 dealt with this delicate balance, stating: "If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed, and in the next place oblige it to control itself."

Now say what you want about Trump, but he has inadvertently exposed a bunch of bad guys hiding in the cracks. I'm not just talking about the thousand or so January 6 militia insurrectionists sitting in jail cells now instead of instigating brawls in their local bars.

 I’m also  talking about you, Leonard Leo, and your friends at the Federalist Society, who schemed for twenty years before successfully packing a Supreme Court willing to topple Roe v. Wade.

And you, Evangelical hypocrites, about whom Marv Knox, who directs the Baptist Fellowship Southwest, says: "Their theology runs the gamut, from neo-Calvinism to historic fundamentalism, with plenty of iterations in-between. Their attitude spans from aggressive preachers, politicians and internet posers to benign/benighted pew-sitters. But what they have in common is a dumbed down view of salvation, as well as a sold-out idolatry of political power."

Point being, in the Donald Trump world of “me-me-me,” these characters can no longer hide. They have been forced out into the open by Trump’s constant need for public adulation, exposed, where their ideas and currency can be openly challenged.

We are already seeing the results of transparency. A very healthy democracy is throwing off the shackles of the Dobbs decision. By August 2023, seven states had put abortion-related measures on the ballot, and access-protecting positions won easily in all seven. During this same period, poll after poll showed large majorities of women finding the Dobbs decision, and the regressive subsequent actions in red states, repugnant.

Most women now believe the issue is reproductive freedom, medical autonomy, and the much-overdue casting aside of the chains of patriarchy. In short, once the screen came down, Dobbs supporters faced an extremely rude awakening, one that will likely undermine Trump's hopes for a second term.

Does that mean we can take our eye off the Trump ball? Just the opposite. Why? Because as Professor Patrick reminds us, "In an authentic democracy, the citizens or people choose representatives in government by means of free, fair, contested, and regularly scheduled elections in which all adults have the right to vote and otherwise participate in the electoral process." And as we all know, Trump will cheat you out of a fair election victory if you give him half a chance.

This is exactly not the time to give up on democracy. Open the windows even if the air is frigid. Let freedom ring. And don’t let the door hit you on the way out of court, Mr. Trump.

Does AI Spell The Demise of Relationship Based Health Care?

Posted on | January 15, 2024 | Comments Off on Does AI Spell The Demise of Relationship Based Health Care?

Mike Magee

“What exactly does it mean to augment clinical judgement…?”

That's the question that Stanford Law professor Michelle Mello asked in the second paragraph of a May 2023 JAMA article exploring the medical-legal boundaries of large language model (LLM) generative AI.

This cogent question triggered unease among the nation’s academic and clinical medical leaders who live in constant fear of being financially (and more important, psychically) assaulted for harming patients who have entrusted themselves to their care.

That prescient article came out just one month before news leaked about a revolutionary new generative AI offering from Google called Gemini. And that lit a fire.

Mark Minevich, a "highly regarded and trusted Digital Cognitive Strategist," writing in a December issue of Forbes, was knee-deep in the issue: "Hailed as a potential game-changer across industries, Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on, yet goes far beyond, predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically."

Health professionals have been negotiating this space (information exchange with their patients) for roughly a half century now. Health consumerism emerged as a force in the late seventies. Within a decade, the patient-physician relationship was rapidly evolving, not just in the United States, but across most democratic societies.

That previous "doctor says – patient does" relationship moved rapidly toward a mutual partnership fueled by health information empowerment. The best patient was now an educated patient. Paternalism had to give way to partnership; teams over individuals; mutual decision making. Emancipation led to empowerment, which meant information engagement.

In the early days of information exchange, patients would literally appear with clippings from magazines and newspapers (and occasionally the National Enquirer) and present them to their doctors with the open-ended question, "What do you think of this?"

But by 2006, when I presented a mega trend analysis to the AMA President’s Forum, the transformative power of the Internet, a globally distributed information system with extraordinary reach and penetration armed now with the capacity to encourage and facilitate personalized research, was fully evident.

Coincident with these new emerging technologies, long hospital lengths of stay (and with them, in-house specialty consults with chart summary reports) had become infrequent methods of continuous medical staff education. Instead, "reputable clinical practice guidelines represented evidence-based practice," and these were incorporated into a vast array of "physician-assist" products that made smartphones indispensable to the day-to-day provision of care.

At the same time, a several-decade-long struggle to define policy around patient privacy and fund the development of medical records ensued, eventually spawning bureaucratic HIPAA regulations in its wake.

The emergence of generative AI, and new products like Gemini, whose endpoints are remarkably unclear and disputed even among the specialized coding engineers unleashing these forces, has created a reality where (at best) health professionals are struggling just to keep up with their most motivated (and often most complexly ill) patients. Needless to say, the Covid-driven health crisis and the human isolation it provoked have only made matters worse.

Like clinical practice guidelines, ChatGPT is already finding its "day in court." Lawyers for both the prosecution and defense will ask "whether a reasonable physician would have followed (or departed from) the guideline in the circumstances, and about the reliability of the guideline" – whether it exists on paper or smartphone, and whether generated by ChatGPT or Gemini.

Large language models (LLMs), like humans, do make mistakes. These factually incorrect offerings have charmingly been labeled “hallucinations.” But in reality, for health professionals, they can feel like an “LSD trip gone bad.” This is because the information is derived from a range of opaque sources, currently non-transparent, with high variability in accuracy. 

This is quite different from a physician-directed standard Google search, where the professional opens only trusted sources. Instead, Gemini might be weighing a NEJM source equally with the modern-day version of the National Enquirer. Generative AI outputs have also been shown to vary depending on the day and the syntax of the language inquiry.

Supporters of these new technologic applications admit that these tools are currently problematic but expect machine driven improvement in generative AI to be rapid. They also have the ability to be tailored for individual patients in decision-support and diagnostic settings, and offer real time treatment advice. Finally, they self-update information in real time, eliminating the troubling lags that accompanied “new releases” of original treatment guidelines. 

One thing that is certain is that the field is attracting outsized funding. Experts like Mello predict that specialized applications will flourish. As she writes, “The problem of nontransparent and indiscriminate information sourcing is tractable, and market innovations are already emerging as companies develop LLM products specifically for clinical settings. These models focus on narrower tasks than systems like ChatGPT, making validation easier to perform. Specialized systems can vet LLM outputs against source articles for hallucination, train on electronic health records, or integrate traditional elements of clinical decision support software.”

One serious question remains. In the six-country study I conducted in 2002 (which has yet to be repeated), patients and physicians agreed that the patient-physician relationship was three things – compassion, understanding, and partnership. LLM generative AI products would clearly appear to have a role in informing the last two components. What their impact will be on compassion, which has generally been associated with face-to-face, flesh-to-flesh contact, remains to be seen.

