Are algorithms a danger to democracy?
In Argentina 4.0, The Citizens Revolution (Prometeo, 2013), we proposed that democracy in the digital age, increasingly influenced by new technologies, would need some kind of “mediation” of public discourse.
Algorithms are performing that task. Are they the “institutions” of digital democracy? Ultimately, new technologies deposit in them the power to structure behaviors, influence preferences, generate consumption patterns, etc.
In this context, an algorithm is the formula a search engine (Google, Bing, etc.) uses to organize the answers it gives to our information requests. It determines the ranking in which a website or news report appears in response to a user’s query.
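To make the idea concrete, here is a toy sketch of how such a ranking might work. The scoring formula, weights and sample documents are invented for illustration only and bear no relation to any real engine’s proprietary algorithm:

```python
# Toy illustration of how a ranking algorithm might order search results.
# The scoring formula and the 0.5 personalization weight are invented
# for illustration; real engines use far more complex, proprietary signals.

def rank_results(query, documents, user_interests):
    """Order documents by keyword overlap plus a personalization bonus."""
    query_terms = set(query.lower().split())

    def score(doc):
        words = set(doc.lower().split())
        relevance = len(query_terms & words)           # keyword match
        personalization = len(user_interests & words)  # user-profile bias
        return relevance + 0.5 * personalization

    return sorted(documents, key=score, reverse=True)

docs = [
    "election results and economic policy",
    "election debate on climate policy",
    "sports scores and highlights",
]
# Two users issuing the same query can receive different orderings:
ranked = rank_results("election policy", docs, user_interests={"climate"})
print(ranked[0])  # the climate-leaning user sees the climate story first
```

The personalization term is what Merkel’s warning, discussed below, is about: the same query returns different front pages to different users.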
The mediation performed by algorithms in their current version, however, does not seem to be the kind of mediation that strengthens democracy.
In 2016, when the race for the next year’s German federal elections began, then-Chancellor Angela Merkel warned of the dangers posed by algorithms, holding them responsible for distorting voters’ perceptions by limiting the information they received based on their previous searches in internet search engines, their geographical location, etc.
In this way, she alluded to the role of the so-called “filter bubbles” and “echo chambers” of Internet search engines. “This is a development to which we must pay careful attention,” she said, because “a healthy democracy depends on people coming up against opposing ideas.”
Around those years, the Cambridge Analytica scandal broke out: the data analysis company contributed to the victory of Brexit by harvesting information from Facebook users (without their consent) to target the news they received and to probe them psychologically in a disguised way. Donald Trump’s campaign (2016) was singled out for using the same tactics.
The danger posed by the use of algorithms has grown since then, along with increasingly powerful versions of artificial intelligence, based on the exponential growth of “machine learning” and the use of Big Data.
Yuval Harari defines the phenomenon as dataism or the “cult of data” (Homo Deus, 2017), which turned “a limited breakthrough in the field of computer science into a world-shattering cataclysm”. For dataism, “organisms are algorithms” that process data. Colonies of bacteria, forests and cities are data processing systems.
Even the ways of organizing the economy (through the market or with state planning) and political systems (democracies or autocracies) are competing data processing systems looking to predict the behavior of the systems they govern.
In a seminal book (The Age of Surveillance Capitalism, 2019), Shoshana Zuboff takes the debate further, proposing the rise of “a new economic order that reclaims human experience as free raw material for hidden commercial practices of extraction, prediction, and sale” and a “parasitic economic logic in which the production of goods and services is subordinated to a new global architecture of behavior modification”.
Just a few years after Merkel complained that algorithms chose the results shown to us first in our Internet searches, limiting the information we receive based on our preferences, Zuboff argues that new technologies can go further and show us information in order to modify our behaviors and determine our preferences.
Anita Gurumurthy and Deepti Bharthur warned in their paper (Democracy and the Algorithmic Turn, 2018) that citizens, “masterfully manipulated by data-based tactics… find themselves increasingly located on the respective sides of an ideological divide”.
More recently, professors Birgit Stark and Daniel Stegmann (with Melanie Magin and Pascal Jürgens) published a paper that tries to answer the question posed at the beginning of this article (“Are Algorithms a Threat to Democracy? The Rise of Intermediaries: A Challenge for Public Discourse”, 2020).
There they estimate that “social networks facilitate the dissemination of false news and hate speech”, although “individuals’ exposure to these phenomena is limited”. For them, “fears that algorithmic personalization will lead to filter bubbles and echo chambers are likely to be exaggerated, but the risk of fragmentation and social polarization remains.”
It is probable that by 2023 technological progress has increased the risks signaled by these authors, who conclude their paper by proposing a “research agenda on the governance of digital platforms”.
This is the same warning expressed in “Argentina 4.0 The Citizens Revolution”, regarding the need to find mediation mechanisms for public discourse in the digital age.
There is no doubt that digital democracy has put the very concept of citizenship at a crossroads. Autocrats have learned how to use new technologies to polarize voters and maintain power.
That’s why it is key to regulate the ability of large digital platforms to influence public opinion (or at least to understand it better), in order to preserve citizens’ freedom of choice and the healthy functioning of democracy.
“Digital democracy” refers to a new phase of democratic practice, driven by innovations aimed at improving politics’ capacity to deliver practical solutions to concrete problems using the new information and telecommunication technologies.
The wave of protest demonstrations around the world and the growing number of self-declared “outraged” point to such a democratic deficit, perceived by more and more citizens.
In recent years there has been much talk about a “democratic recession” dominating world politics, a concept first coined by Larry Diamond in 2009. Much has happened since then.
The latest report from Freedom House (2022) states that “…present threat to democracy is the product of 16 consecutive years of decline in global freedom…60 countries suffered declines over the past year, while only 25 improved. As of today, some 38 percent of the global population live in Not Free countries, the highest proportion since 1997. Only about 20 percent now live in Free countries.”
No surprise, then, that societies around the world are trying democratic innovations. It is the same idea we promote from this blog and the pages of “Argentina 4.0 The Citizens Revolution” (2013), under the concept of “Digital Democracy”, referring to the potential contribution new technologies could offer for democratic renewal, in an era dominated by the digitalization of every aspect of our lives.
Most efforts focus on strengthening decades-old community engagement programs, leveraging the possibilities offered by new technologies.
Indeed, “citizens’ assemblies” and “panels” have been around for a while. They are now regularly implemented through stratified selection techniques, to give all citizens an equal chance of participation and to secure representation from diverse sectors of society. Their debates are highly structured around preset remits or elaborate formal institutional processes.
At the heart of these processes lies the need to educate the sovereign, at least on the subject under discussion. This is precisely the concept embodied in the citizens’ jury approach, built on the idea that groups of everyday citizens can create powerful solutions to today’s biggest challenges when given the knowledge, resources, and time.
The Center for New Democratic Processes (formerly the Jefferson Center) pioneered the concept, structuring it around phases including: a) understanding the challenge; b) designing the process; c) inviting the community; d) selecting participants; e) providing information; f) facilitating deliberation; g) creating recommendations; and h) amplifying and sharing.
These “democratic innovations” are tried mostly in Western democracies. An interesting project run by the European Democracy Hub (reported by Carnegie Europe) explores innovations tried beyond the Western world, “grouped into three clusters: 1) efforts to extend democratic participation within existing consultative processes; 2) more open forms of participation that involve relatively large numbers of citizens; and 3) attempts to connect citizen participation to other political actors.”
Among the first, for example, the report cites “online petition processes” that, although they do not include deliberative instances, facilitate citizen participation in setting the public agenda (South Korea), as well as participative bodies and platforms that allow citizen participation and deliberation on priorities and government performance at the municipal level (Georgia and Nigeria).
Under the second, open participation mechanisms loosen the boundaries of the stratified selection techniques mentioned above to create a wider space for interaction between governments and civil society organizations (CSOs) (mostly in Latin America and India).
The last cluster refers to programs focused “on fostering more systematic interaction between online citizenship and public-authority decision-making”, including citizens’ juries (Malawi, Nigeria, Ghana, South Korea, Taiwan, North Macedonia, Latin America).
Citizens’ assemblies on social care, rural county climate dialogues, citizens’ juries on artificial intelligence or pandemic data sharing, all-inclusive village parliaments, social audits and other innovations are building a body of new democratic experience. These experiences are valuable to the extent that they aim to strengthen democratic representation rather than substitute for it.
While we may still be quite far from closing the gap between citizens and their representatives, a strong and renewed “technology-driven” citizen participation drive is forcing political elites around the world to innovate and facilitate ordinary people’s engagement.
As projected in decade-old writings (including Argentina 4.0 The Citizens Revolution), local governments are taking center stage in the process and remain the best hope for a genuine and lasting democratic renewal.
A few days ago, upon receiving a report on the first modular house built by a 3D printer (with biodegradable materials) in the USA, I wondered how relevant this news was in the paradigm shift facing the construction industry.
This sector has been using 3D printers for some time. From humble beginnings in the mid-1980s, when designers created a pattern that was then printed layer by layer as a physical object (using a laser aimed at liquid photopolymer to solidify it, a process known as “stereolithography”, or SLA), the use of those printers has come a long way.
At the beginning of the 2000s, the Autodesk company presented a white paper entitled “Building Information Modeling (BIM)” which, gathering the research work of previous decades, defined the holistic process of creating and managing information for a built asset, “based on an intelligent model and enabled by a cloud platform”.
This model “integrates structured and multidisciplinary data to produce a digital representation of an asset throughout its life cycle, from planning and design to construction and operations.”
In 2006, the University of Southern California (USA) presented its “Contour Crafting” system, with a printer that works like the one we have on our desk, using a crane to print and concrete to place the structural elements of the building. MIT (USA) practices 3D printing using a large, controllable robotic arm to spray materials, such as concrete, through traditional construction nozzles.
In the mid-2010s, projects by Dutch companies appeared. One of them uses a giant printing arm to build a typical canal side house out of plastic. Another prints a fully functional stainless steel bridge and places it over one of Amsterdam’s oldest and most famous canals, using multi-axis printing (MX3D) technology.
In those years, an American company partnered with the Oak Ridge National Laboratory (of the US Department of Energy) to produce efficient modular homes printed in 3D with a combination of renewable energy and natural gas systems.
The Chinese also unveiled their projects around the same time, announcing the “printing” of a two-story mansion in 45 days and of 10 houses in 24 hours.
Since 2018, numerous projects have proliferated. Apis Cor, BatiPrint, Black Buffalo 3D, COBOD, Constructions 3D, Contour Crafting, Cybe Construction, Icon, Mighty Buildings, Mud Bots, SQ4D, Wasp, XtreeE, are some of the companies that carry them out.
In 2021, the Eindhoven University of Technology’s “Milestone Project” (Netherlands) delivered the first of 5 houses it plans to build in the city, with an energy coefficient of 0.25, considered highly efficient.
Although the number of projects is constantly growing, it is clear that 3D printing is not yet mainstream in the construction industry. However, the projections for the coming years seem to indicate that we are steadily heading towards a tipping point.
Different reports indicate that the use of BIM among architects is constantly growing and could cover 80 or 90% of their projects by 2024. Similar percentages are reported for the use of BIM among structural and civil engineers by the same year. Among contractors the estimated percentage of current BIM use of around 40% could grow to 70% of the projects they work on by 2024.
The differences in the perceived return on investment across the branches of construction (architects, engineers, contractors) explain these projections, so greater help in understanding and using BIM among professionals could increase the perceived return and, with it, adoption.
The house built in Maine by the research group of the University of Maine’s Advanced Structures and Composites Center (ASCC), which led me to write these lines, is called BioHome3D and consists of 4 modules.
The walls, floor and ceiling are made of wood fiber and bio-resins. The house is highly insulated and recyclable and took only 12 hours to build. Gagadget.com reports that it took a single electrician two hours to hook up the power.
Some point out that 3D printed houses could be a solution to reduce the cost of housing and pollution from the construction industry. Considering the speed at which new applications are developed in this industry, it is possible that we will have much more news to stimulate us to write again in the next 5 years.
In 2013, Professor Mark Post of the University of Maastricht, who was working on the repair of heart tissue, led an experiment that would mark a milestone: he managed to produce the first hamburger made from meat grown in a laboratory.
After two years of work, and at a cost of US$325,000, he thus turned a typical science fiction plot into a solid scientific reality.
Since 2010, prominent technologists and visionary entrepreneurs have embarked on the development of a new food industry based on the production of proteins in laboratories. Many smart people are dedicated to making the technology viable, with the idea that it could dominate the future of food. Will it?
Since December 2020, the company Eat Just (Good Meat) has been selling laboratory-grown chicken meat in Singapore (the first country to approve the product for human consumption).
As reported by CNBC’s “Make It” blog, as early as March 2021 the chicken nugget was available at Singapore’s 1880 restaurant, retailing last year at around $17 for a prepared meal.
As Upside Foods, founded in 2015 (which claims to be the world’s first dedicated cultured meat company) reports, “what makes our meat (beef and chicken) unique is how it’s grown: we take a small sample of healthy chicken cells. We put it in a nutrient-rich environment and allow it to turn into pure, clean meat, ready to cook and enjoy.”
This is a process that has been called “cellular agriculture”. And it doesn’t just reach beef or chicken.
This year, Upside Foods acquired Cultured Decadence, a cultured seafood company, and closed a record $400 million fundraising round. And the company Perfect Day, established in 2016, with offices in the USA and India, is dedicated to the production of animal-free protein from a fungus genetically programmed to create it, using the same precision fermentation technology responsible for medical insulin.
From milk to ice cream to cream cheese, Perfect Day’s milk protein is now available in over 5,000 stores across the US. But instead of being made by cattle, it is cow whey protein made without the cow, and it contains no lactose.
Another company of the Eat Just group turned to the mung bean, a protein-rich legume commonly used in Asian cuisines, and in 2018 Just Egg was born. To date, the company has sold the equivalent of 100 million plant-based eggs at major retailers including Walmart, Whole Foods Market and Alibaba.
Arguments in favor of cellular agriculture production prominently include protection of the environment.
It is true that beef and milk are the two commodities with the highest emission records in the agricultural sector (they represent 3 and 1.6 gigatons of carbon equivalent, at 2010 values, out of a total of 8.1 in the sector), according to the dashboard of the Food and Agriculture Organization (FAO).
However, the contribution of cellular agriculture to the reduction of carbon emissions by the sector is still a matter of discussion. A study published in “Frontiers” in 2019, concludes that “cultured meat is not prima facie climatically superior to cattle; instead, its relative impact depends on the availability of decarbonized energy generation and the specific production systems that are carried out.”
The trial is open. Meanwhile, the parties will argue feverishly for and against the production systems that compete to define the future of food in a more populous world that will demand more and more protein.
As we argue from these pages, technology disrupts every aspect of our lives: the way we communicate, consume and work. Working to order, without a doubt, is not something new in industry or services. It has existed since the dawn of economic activity.
What is new, in any case, is the proportion acquired by the sector of the economy that gathers its income in this way, working for hire and acting as “free agents and occasional earners.”
Nobel laureate in economics Ronald Coase anticipated this might happen in his theory of transaction costs. According to him, companies only hire staff directly when the costs of incorporating employees are lower than those of hiring contractors or third parties outside the company.
The labor market that emerged after the 2007/8 international financial crisis, thanks to the reduction of transaction costs for companies that outsource (boosted by technological progress), brought the proportion of independent workers to very high levels.
So high that they deserve their own name: the “gig” economy. That word refers to the transitory character of the work itself; it comes from musical jargon, where it refers to short performances by musical groups.
The “gig” world includes consultants, independent contractors and professionals, as well as freelancers and temps, and has been under close study.
However, since it is technology that makes it possible, it has become difficult to classify what counts as part of this economy and what does not. That is why some contemporary analyses say it is growing, while others point out that it may be contracting slowly.
When we talk about the gig economy, people think of Uber, Lyft, TaskRabbit, and Airbnb, companies that have popularized the concept while globalizing their operations. But we should also add multi-job holders, contingent and part-time workers, as well as highly specialized consultants and contractors, to the list.
Flexibility, “online” communication and offshoring are frequently cited as the main characteristics allowing the expansion of the “gig” economy. With these tools, companies can use talent located thousands of miles away, in different time zones.
It is likely that, as happened after the financial crisis of the first decade of the century, the “gig” economy will grow strongly once the SARS-CoV-2 pandemic is definitively left behind.
That’s why it is worth analyzing future scenarios, for example, the relationship between the “gig” economy and “blockchain” technology.
The Massachusetts Institute of Technology (MIT) has launched a “blockchain wallet” where its students can carry their degrees and credentials, thus accrediting their knowledge and skills.
Platforms such as Coinlancer and Ethlance incorporate blockchain to increase the transparency of economic transactions between clients and contractors, using cryptocurrencies as remuneration.
There are even fintechs that are creating specific tools for this economic model (such as Azlo start-up).
As is always the case in times of change, it is necessary to balance emerging opportunities with the difficulties arising from this true revolution in the labor markets of the 21st century.
In many countries, social security is linked to standard employment contracts; this represents a social problem that must be addressed. Likewise, the typical employment relationship of “office environments”, which facilitates interaction with other workers, peers or superiors to solve problems and to acquire or improve skills, is absent.
Simultaneously, aspects of our daily lives that were previously reserved to domestic activity, such as cooking, are opening up to private capital and venture investment.
While some workers see the gig economy as an opportunity to supplement their formal income and others are unwilling to leave their status as free agents or occasional earners, many wonder if the difference with previous outsourcing experiences (which were associated with job insecurity) lies in the willingness of workers to use new technologies to organize their income and whole working life in a new way.
Computer science, the main discipline that led us from an age of change to a “change of age”, has crossed a new frontier. It can now operate in “exaflops”.
Increasingly, the development of science is based on “simulation models” that demand ever greater computing capacity. The more precise and detailed the models are, the better the results that can be obtained in the discipline under study.
For this reason, today, the production of the best science is in the hands, to a great extent, of those who have the best computer equipment.
There is a publication that tracks the 500 fastest supercomputers in the world. A recent article, published in the prestigious financial newspaper “Financial Times”, gives an account of the two annual reports that are published on the subject.
According to this report, the most powerful computer that has ever existed is located on the “campus” of the Oak Ridge National Laboratory (of the US Department of Energy), and was baptized “Frontier”.
It is a $600 million machine with 74 cabinets (each one weighs as much as a truck), cooled by more than 22,700 liters of water per minute and connected by almost 145 kilometers of cable, capable of processing a billion billion operations per second (a measure known in computing as an exaflop).
The processing speed of these supercomputers accelerates regularly. In the last 7 years it grew 10 times, from the order of 100 million billion operations per second to a billion billion.
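The orders of magnitude mentioned here can be checked with simple arithmetic (a minimal sketch; the figures are those quoted above, rounded):

```python
# One exaflop = a billion billion (10**18) floating-point operations per second.
exaflop = 10**9 * 10**9  # a billion billion

# Seven years earlier, the leading machines were of the order of
# 100 million billion (10**17) operations per second.
earlier_top = 100 * 10**6 * 10**9

print(exaflop // earlier_top)  # → 10, the tenfold growth mentioned above
```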
In total, 173 of these super machines are in China, 35% more than the United States has (128). In this way, the Asian country reaffirms its leadership in a critical field for scientific and technological progress, an area in which it has held primacy since the 2000s.
It is interesting to note that, even though these computers have a special application in science, half of them are in the hands of industry, a fact that once again demonstrates the importance of public-private articulation in strengthening the innovation ecosystem for a modern economy.
These supercomputers, applied to the formulation of climate, energy, aerospace or biomedical “models”, will allow the development of more appropriate solutions for the challenges we face in all these fields.
For this reason, it is necessary to promote, through international cooperation, mechanisms that ensure healthy competition to develop more and better supercomputers, preventing geopolitical tensions from delaying progress and solutions we seek.
The COVID-19 pandemic has taught us, in a very painful way, how much we can gain by sharing information and working collaboratively, and how many lives are at stake when we act against common sense.
A civilization that has managed to develop the technology on which these supercomputers are based should be able to rescue the ethic of collaboration to strengthen the provision of global public goods.
We will not offer an academic analysis of economic theories here. Furthermore, it cannot be said that the “new” growth theory was developed from the emergence of the new economy; in any case, it precedes it by several decades and explains it.
What both have in common is the central role assigned to innovation and technical change in promoting economic growth.
The concern to determine the engines of growth has been present among practitioners since the very emergence of economics as a science.
Adam Smith raised it in the title of his founding book. A century later, Alfred Marshall placed the search for growth at the center of economists’ concerns.
This search intensified at the end of the Second World War, triggered by the decolonization process, which increased the number of member countries of the international community from six dozen to the current two hundred.
In this note we will not analyze the outstanding contributions to economic development thought made by great professionals such as Raúl Prebisch and Hans Singer, Prasanta Chandra Mahalanobis or Paul Rosenstein-Rodan. We will just review three ideas in order to highlight the role of innovation and technical change.
The Harrod-Domar model (1946) provides the first powerful insight for postwar economists, basically stating that the level of investment in a given period determines economic growth in the next.
Although its authors ruled out this approach as a growth theory, even today these ideas inform the analyses and decisions of multilateral credit organizations.
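In its standard textbook form (a sketch, not taken from the source), the Harrod-Domar intuition is often written as:

```latex
% Harrod-Domar growth rate: s is the savings (investment) rate,
% v is the capital-output ratio (capital needed per unit of output).
g = \frac{s}{v}
```

So that, other things equal, a higher investment rate in one period translates into faster growth in the next.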
A decade later, Robert Solow (1957) made a decisive contribution. The Nobel laureate stated that countries’ long-term growth could only be explained by the rate of technological change in a given economy, due to the law of diminishing marginal returns.
According to this law, the use of additional units of a productive factor (capital or labor) generates less than proportional increases in the final product. Therefore, it is not possible to grow in the long term by only increasing the factors of production.
It is necessary to change technology, use new materials or different production techniques to maintain constant growth.
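In textbook notation (again a standard sketch, not from the source), the Solow view can be summarized with a production function in which technology enters as the factor A:

```latex
% Cobb-Douglas production function with technology level A,
% capital K and labor L; 0 < \alpha < 1 implies diminishing returns:
Y = A\,K^{\alpha}L^{1-\alpha}, \qquad
\frac{\partial^{2} Y}{\partial K^{2}} < 0
```

Because each factor’s exponent is below one, adding more capital or labor alone yields less than proportional output gains; sustained long-run growth requires the technology term A to rise.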
Solow believed that the process of technological change could not be easily influenced. For him it depended on extra-economic factors, such as the development of basic science.
Three decades later, Robert Lucas and Paul Romer (1986/1987) argued that the law of diminishing returns does not hold in the presence of technological change, and that such change is not only due to non-economic factors but can be influenced by public policies.
A simple conclusion from this brief analysis tells us that accumulating capital is key to economic growth. No doubt. Our understanding of the ways to attract and accumulate it, however, has been enriched over time, giving a central role to innovation and technical change.
From the simple formula of domestic savings supplemented by external resources (private investment) to the more elaborate proposals of Solow, Lucas and Romer where the accumulation of capital must be articulated with the technological change of the economy to sustain long-term growth, the role of knowledge and innovation will only continue to grow through the years.
Productivity growth, innovation and technological development are, in themselves, decisive sources of attraction and accumulation of capital.
These are factors that complement each other in an essential way with the building of peoples’ social capabilities in the fields of education, health, science and the technological system.
Preserving growth patterns and creating more quality jobs are very much needed in a global economy dealing with the aftermath of the support packages implemented to face the pandemic and the events unfolding from the renewed trend to fight turf wars.
The return of inflation, the decoupling of global value chains, the disruption of global production networks, and the food and energy crisis have, in a few months, turned upside down the trends that organized the international economy in recent decades.
Meanwhile, technological changes of great disruptive power do not stop, and continue to shake well-established business models and challenge traditional assumptions about consumption, resources, labor, capital and competition.
Would it be too ambitious to propose that the G7 and G20 leaders create a global compact for SME and entrepreneurship development?
Talking about major macroeconomic trends, we quite often leave aside key issues for local economies. SMEs are the backbone of the entrepreneurial community.
Unless we try something like a Global Compact for SMEs, insufficient results in quality growth and job creation will persist. The SME policy toolbox needs to be consolidated, renewed and upgraded. The global economy is changing fast. Again.
A global compact for SMEs and entrepreneurship development could help unleash the entrepreneurial spirit and the potential of SMEs in a global economy facing a challenging context. The chances of facing an “Atlantic” (if not global) recession in the next eighteen months are not low.
While its detailed framework deserves proper thinking, past policy practice suggests a focus on limited and workable priorities to target common SME hurdles, for example:
■ Access to capital to complement traditional bank lending with instruments to strengthen asset-based finance among SMEs, including alternative debt mechanisms (corporate bonds, securitised debt, covered bonds), hybrid instruments (combining debt and equity), public SME equity markets and crowdfunding.
■ Access to international markets and knowledge flows. This could be achieved by working on global value chains and global production networks to create basic standards and disseminate knowledge for SMEs, enabling them to acquire the skills needed to participate more actively in the global economy.
■ Access to global and local innovation networks, public research and procurement opportunities to stimulate collaboration among universities, research laboratories and SMEs, such as environmentally sound technologies, new materials and other technologies related to climate protection.
■ Improved management skills, including risk assessment, strategic thinking, networking, decision-making, information processing and other similar skills through SMEs and entrepreneurship educational programmes.
Firm age and size are important. Young innovative firms are net job creators (although they may have limited weight in the economy). Teaching SMEs how to handle intangible assets and intellectual capital could also provide opportunities for new firms and create new jobs. A special SME section in multilateral trade agreements (as tried in their latest versions) could also be promoted.
Adding a local dimension and working at city or district level could also be significant. A global database of instruments’ performance information and best practices could be of great help.
Government and business leaders must prepare for a very different reality. New approaches are needed even for well-known challenges, especially if we want to create jobs for inclusive, quality growth.
In February 2003 we had a first warning. SARS, a coronavirus detected in Asia, spread to 24 countries in America, Europe and Asia before being stopped in July 2003.
Then MERS appeared in the Middle East. Due to its characteristics (it originated in camels and is transmitted by very close contact, mostly in hospitals), it did not become a global threat.
In January 2020, the first pandemic in a century finally arrived, caused by SARS-CoV-2. As has been said in this blog, it generated a triple crisis: in health, economics and leadership, all of them interconnected and mutually reinforcing.
In a year and a half, more than 170 million people have been infected and more than three and a half million deaths recorded (some calculations raise that figure to 10-13 million when analyzing excess deaths compared to previous years). The global economy is estimated to have contracted by 3.5%, the biggest peacetime decline in more than a century.
In this bleak context, science and technology became our hope. And rightfully so.
Sharing information and publishing material, the scientific community dedicated itself to fully exploiting the knowledge infrastructure accumulated since the last pandemic (the Spanish flu of 1918): antibody measurement, evaluation of its functioning in the viral system, the life cycle of the virus, etc., in order to produce a coordinated response.
And it did so at unprecedented speed. Sequencing the genetic code for SARS in 2003 took three months. In 2020, decoding the one for COVID-19 took just a few days. Three months later, the first vaccines began their clinical trials. In just one year, the first vaccines were approved for emergency use in many countries.
Today we have more than fifteen vaccines, using different technologies.
Inactivated virus vaccines: Sinopharm (BBIBP), Sinopharm (WIBP), CoronaVac, Covaxin, Covivac, QazCovid-in.
Adenovirus vector vaccines: Oxford-AstraZeneca, Sputnik V, Sputnik Light, Johnson & Johnson, Convidecia.
Protein subunit vaccines: EpiVacCorona, RBD-Dimer.
Messenger RNA vaccines: Pfizer-BioNTech, Moderna.
Some advances, such as vaccines using messenger RNA, are truly amazing. In a few months of 2020, science was able to synthesize and apply research that had taken decades in order to develop new vaccines.
Basically, messenger RNA is a molecule that “bridges” the genetic information contained in a cell’s DNA, carrying it from the nucleus to the cytoplasm, where it is translated into proteins.
With this technology, vaccines that use messenger RNA instruct the human body to produce the “spike” protein located on the surface of the virus, triggering the generation of antibodies against SARS-CoV-2. As you can see, there is no need to inject an attenuated version of the virus into the body, as traditional vaccination technology does.
RNA is unstable and it is not easy to work with. With the help of nanotechnology, scientists managed to deposit it in a small lipid particle, to later design a version of that nanoparticle that could be safely injected into humans.
Innovations in synthetic biology did the rest, making a key contribution to developing vaccine production based on this technique.
Messenger RNA is an extraordinary achievement. It promises unexpected applications in a short time. It could even make possible the faster development of new vaccines.
More vaccines, of better quality, applied through effective vaccination plans, offer prospects for the recovery of the economy and our freedoms. Almost 2 billion doses have been applied in less than 6 months.
We were right to trust our hopes in science. It lived up to expectations and has been able to bring about changes that cut across physical and knowledge boundaries.
Once again, it offers us the opportunity to think about a better future. And to make SARS-CoV-2 the last pandemic.
Answering the question is challenging. Many definitions have been proposed. I will try to briefly explain how the emergence of this new phenomenon, which has been in the making for several years, can be analyzed. Humanity’s cultural and social changes can be traced through innovation and technical change.
Fire, the wheel, the printing press and the steam engine took decades to fully realize their potential. They did so only when they were adopted by ordinary people through applications that gradually influenced the way daily life is organized.
The same is happening with the information technology and telecommunications revolution over the past three decades. The international financial crisis of 2007/8 fueled the process. The SARS-CoV-2 pandemic that we are going through is accelerating it again and reinforcing its characteristics.
Changes are so dramatic that we hardly recognize the economic organization of the 20th century. Each technological innovation in its time was applied to various fields of human activity. None, however, had such a wide diffusion and an impact as singular as the information technologies of our time. The appearance of the steam engine is probably the most similar example. Today, IT technologies are present in every aspect of our lives, permanently and increasingly.
Even in that context, I wouldn’t dare to explain the “new economy” as a phenomenon defined by a unique and particular technology, because it goes beyond any of them individually. Its heart is in the relationship that emblematic sectors establish and the way they fertilize each other, as well as the manner they influence the traditional products and markets of the old industrial and agricultural economies.
It is characterized by the appearance of new economic actors that operate under new rules of the game and demand new institutions for an economy that shift the business focus towards new objectives, operating in a new geopolitical balance.
How does the “new economy” compare to the economic organization of the 20th century?
Practically, none of the elements that make up the landscape of the new economy (actors, institutions, rules or sectors) is necessarily new for entrepreneurs or unknown to citizens. Many of them, individually, are even familiar to us.
The new economy is the result of the changes that are taking place, at the same time, in all these fields and in the way they are reinforced and strengthened.
When we speak of the emergence of a new economy, we do not mean that one model of economic organization replaces another once and for all, but rather that a process of gradual enrichment takes place, driven by a torrent of simultaneous technological innovations in various fields of science, which fertilizes the existing economic organization with new elements until it completely changes its physiognomy.
When I first wrote about this topic in 2013, I had already seen the effect that the international financial crisis had on the emergence of a new economy, accentuating trends and processes. Many regulations that I anticipated in my books (about “rules of the game” or “institutions”) began to be discussed and some were implemented.
I could not imagine then that COVID-19 would accelerate the definition of new roles for economic actors and agents, while highlighting the critical importance of international cooperation for the provision of Global Public Goods and the key role of political leadership for the nations.
In any case, the new economy is already among us, with a more defined presence every day.