Atomic Cs


Friday, May 17, 2019

May 17, 2019

WHAT IS A PHOTON? DEFINITION OF PHOTON



What is a photon? The definition of a photon says that it is a particle of light that propagates in a vacuum. Basically, the photon is the particle from which the quantum manifestations of the electromagnetic phenomenon arise: it is what carries electromagnetic radiation, in its diverse forms, from one point to another.

A photon is an elementary particle in which all forms of electromagnetic radiation converge, including light. When we talk about the forms of electromagnetic radiation, we mean everything from microwaves to gamma rays, through X-rays, infrared light, radio waves, and ultraviolet light. There are many others, but these are the most common. The photon is also the carrier particle of the electromagnetic force.

It is not easy to answer what a photon is or to give a definition of a photon, and the terminology in this field is not the kind we handle daily. Still, we will say that a photon, having zero rest mass, can travel through the vacuum at a constant speed. For this reason, a photon can be analyzed both microscopically and macroscopically.

And, above all, it does so while maintaining its corpuscular and wave properties, which allow it to act as if it were a wave, at least in special circumstances such as when it is refracted by a lens. In turn, the photon remains a particle, as long as its quantity of energy is unaltered when it reaches the material.

For example, a lens can refract a single photon and, during the refraction process, the photon can interfere with itself as if it were a wave. At the same time, it does not stop being a particle, with a momentum that we can quantify. Both the wave and quantum properties of the photon can be measured in a single phenomenon, although photons cannot be precisely localized in space.


PHOTON HISTORY

It was Albert Einstein who named photons, although the name he gave them was not "photon" but "light quantum". It was the early twentieth century, and with these light quanta Albert Einstein sought to explain experimental observations that did not fit the usual model of light as an electromagnetic wave.

Einstein's redefinition of the light quantum, or photon, established that the energy of light depended on its frequency, in addition to explaining how matter and electromagnetic radiation could maintain thermal equilibrium with each other. And it did not stop there: the new photon concept also addressed anomalous observations of the caliber of blackbody radiation.

An explanation that had eluded most physicists, and that had been impossible to produce using the well-known semiclassical models. For example, Max Planck, a renowned physicist, described light by means of Maxwell's equations, but proposed that light is emitted and absorbed by material objects in small packets of energy.

Although these investigations were the basis of quantum mechanics, it was demonstrated shortly afterward that Albert Einstein's hypothesis was correct; without going any further, through the Compton effect.

Years later, in 1926, the physicist Gilbert Lewis, together with the optical physicist Frithiof Wolfers, changed the concept of the quantum of light to the current concept of the photon, which has remained to this day. The term photon is taken from the Greek word that means precisely light, which fits perfectly.

A year later, after Arthur H. Compton was awarded the Nobel Prize in Physics thanks to his theories on scattering, the scientific community embraced the fact that light quanta existed independently, and accepted the term photon to replace "quantum of light".

What's more, in physics, photons are denoted by the Greek letter gamma (γ). It seems logical: gamma rays are made of photons, and since the origin of the name is Greek, the Greek tradition continues.

However, things change when we talk about chemistry, or even optical engineering. In these fields, photons are denoted by the symbol hν, where h is Planck's constant and ν is the frequency; the product hν is the energy associated with each photon. A photon, as we have already mentioned, has no rest mass, nor does it have any electrical charge. For this reason, it does not spontaneously decay in a vacuum.
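
To make the hν notation concrete, here is a minimal sketch (our own illustration, not part of the original article) that computes a photon's energy from its frequency or wavelength using E = hν = hc/λ:

```python
# Energy of a single photon: E = h * nu = h * c / lambda.
# Constants are the exact SI values.

h = 6.62607015e-34  # Planck's constant, J*s
c = 299_792_458     # speed of light in vacuum, m/s

def photon_energy_from_frequency(nu_hz: float) -> float:
    """Energy in joules of a photon with frequency nu (Hz)."""
    return h * nu_hz

def photon_energy_from_wavelength(lambda_m: float) -> float:
    """Energy in joules of a photon with wavelength lambda (m)."""
    return h * c / lambda_m

# Example: a green photon (532 nm) carries ~3.7e-19 J, about 2.3 eV.
E = photon_energy_from_wavelength(532e-9)
print(f"{E:.3e} J = {E / 1.602176634e-19:.2f} eV")
```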

The ways in which photons are emitted are as common as they are natural: without going any further, when a charged particle is accelerated, during a molecular transition, or at the moment when a particle is annihilated with its antiparticle.

This standard model of the photon follows from the fact that the laws of physics are almost symmetrical in space and time. Properties of particles, such as mass or electrical charge, are determined by this symmetry. Thus, photons have been applied in fields such as high-resolution microscopy, photochemistry, and the measurement of molecular distances. A line of work that began more than a century ago, but that continues to yield advances to science, started by a pioneer like Albert Einstein.

May 17, 2019

NUCLEAR ENERGY, GOOD OR BAD?


WHAT IS THE NUCLEAR ENERGY?

We start with the basics: what is nuclear energy? Nuclear energy is the energy contained in the atomic nucleus, the central part of an atom. Atoms, in turn, are the smallest particles into which a material can be divided.

To understand what nuclear energy is, we must know that in each atomic nucleus there are two types of particles: neutrons and protons. These particles are always bound together, held by the nuclear force; nuclear energy is the energy stored in that bond.

Thanks to nuclear technology, human beings have been able to convert that nuclear energy into other, more usable forms of energy. The most common, and the most consumed in the whole world, is electric power.


HOW NUCLEAR ENERGY IS PRODUCED

Once we know what it is, let's learn how nuclear energy is produced. Production can occur in two different ways: nuclear fission or nuclear fusion.

  • Nuclear fission

Nuclear fission occurs when the nucleus of an atom is split. The usual way to achieve this is to bombard the nucleus itself. When it fragments, the resulting mass is lower than the original: even if we add up all the fragments, the final mass is less than the one we started with. The missing mass is transformed into energy (see the sketch after this list).

Nuclear fission can be reached in two ways:

+ Induced fission - Occurs when the nucleus of an atom captures a neutron

+ Spontaneous fission - As the name indicates, it occurs unexpectedly, when an unstable isotope decays.

  • Nuclear fusion

Nuclear fusion, meanwhile, occurs when two atomic nuclei join together. For this, they must be lightweight, and the reaction emits neutrons. When fusion takes place, a great deal of energy is released, which can be recovered from the gamma rays and from the kinetic energy of the emitted neutrons.

For nuclear fusion to occur, several conditions must be met, and they do not always occur. To begin with, the temperature must be extremely high, so that the electrons separate from the two nuclei. Secondly, it is imperative that the two atoms be confined, so that their temperature does not drop. Finally, the plasma that forms must have a specific density to trigger nuclear fusion.

If any of the three requirements is not met, there will be no nuclear fusion. A nuclear fusion that, like this energy in general, is highly dangerous.
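
Both fission and fusion release energy through the mass defect described above, via Einstein's E = Δmc². As an illustration (our own worked example using standard atomic masses, not figures from this article), here is a sketch for the well-known deuterium-tritium fusion reaction:

```python
# Energy released by D + T -> He-4 + n, computed from the mass defect
# (E = delta_m * c^2). Atomic masses in unified atomic mass units (u).

U_TO_MEV = 931.494          # energy equivalent of 1 u, in MeV

m_deuterium = 2.014102      # u
m_tritium   = 3.016049      # u
m_helium4   = 4.002602      # u
m_neutron   = 1.008665      # u

mass_before = m_deuterium + m_tritium
mass_after  = m_helium4 + m_neutron
delta_m = mass_before - mass_after          # mass defect, in u

energy_mev = delta_m * U_TO_MEV
print(f"mass defect: {delta_m:.6f} u -> {energy_mev:.1f} MeV released")
# Prints roughly 17.6 MeV, the well-known yield of the D-T reaction.
```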

HOW NUCLEAR ENERGY WORKS

It's time to learn how nuclear power works. As we have already mentioned, it is based on collisions within the atomic nucleus. What triggers the chain reaction is a neutron absorbed by a nucleus of plutonium, uranium, or thorium. From that moment, the fission releases two more neutrons that collide in turn with neighboring nuclei, and so on successively.

Understanding how nuclear energy works means understanding that this process generates huge amounts of energy, especially heat. The objective of nuclear power plants is to keep this process under control. And they have achieved it, obviously.

To begin with, by diluting the fissile material and reducing the speed of the neutrons. For this, they use the so-called moderator. Once the process is controlled, the nuclear power plant can harvest that energy. It does so with a cooling circuit that not only carries the heat away but also cools the nuclear reactor. Note that the coolant, in many nuclear power plants, is water.

In fact, what failed at the Fukushima power plant in Japan was the main cooling system and the backup cooling system. The reactor overheated, and the pressure caused steam to escape from the plant, despite it being hermetically sealed. Obviously, the steam that was released was radioactive.

Normally, the nuclear reactor operates at a temperature of around 1,200 °C, but the risk of a core meltdown appears when it reaches 3,000 °C.

CHARACTERISTICS OF NUCLEAR ENERGY

Lastly, we review the characteristics of nuclear energy: characteristics that make it an unrivaled source of energy, but one with many risks to assume.

Among the characteristics of nuclear energy, the most recognizable is its almost unlimited capacity. The sheer amount of energy that nuclear power produces is tremendous, especially when we compare it with other known types of energy. However, nuclear energy also generates radioactive waste, which is very complicated to dispose of.

Is nuclear energy good or not? Are we for or against nuclear power plants? At Erenovable we have published different points of view and opinions on this topic: Greenpeace is against nuclear energy, while Fidel Castro's son bets on it, and Spain has divided views on atomic energy.

Those mentioned above are just some examples, but opinions on nuclear power are controversial. Should we be afraid of nuclear energy or not? Should we put aside this type of energy or, on the contrary, should we insist on it? What is the future of nuclear energy in the world? What is the cost of electricity of nuclear origin?

Recently, we remembered the Chernobyl accident. It is therefore very important to define the nuclear risk of plants and laboratories. It is also worth inquiring into the purely technological problem posed by the organization of work and the control of nuclear safety, as well as the safety of nuclear installations. The questions are many, and they keep coming. Who controls the status of private laboratories? What happens with nuclear waste? What is the risk of contamination?

As we know, there are many forms of energy (fossil fuels such as coal and gas, hydraulic, solar, wind, geothermal, biomass, etc.). Some of these are renewable, clean energies, and others are not.


If we think about coal, to take an example, we will see that the world's coal reserves are sufficient, in theory, to produce all the electricity we need for about a hundred years. However, it is likely that in the future more and more coal will be converted into more valuable liquid fuels and will no longer be available to generate electricity. In any case, coal does not belong to the group of clean energies and is an environmental problem, because its combustion releases carbon dioxide and increases the greenhouse effect.

In Western countries, nuclear power is attractive because the cost of electricity from most nuclear power plants remains competitive. It seems to be a very profitable and very active sector. Moreover, the heat generated by the radioactivity of uranium produces approximately 16,000 times more energy than coal: a 100 MWe nuclear power plant consumes the equivalent of 3.1 million tons of coal per year but only 24 tons of uranium.
May 17, 2019

Solar panels that use bacteria to generate electricity


A new generation of solar panels that use bacteria can generate electricity even under cloudy skies. Renewable energies are here to stay, but they still have their drawbacks. In the case of photovoltaic panels, their viability is compromised in northern areas with little solar radiation.

In addition, although their production costs have fallen and their efficiency has increased greatly in recent years, it is still difficult to find, on average, light-to-energy conversion percentages greater than 25%.

Currently, the main material used is silicon, but that may change in the not-too-distant future. And not thanks to a latest-generation material such as graphene, but to one of the oldest forms of life on our planet: bacteria. Specifically, a species with a bad reputation, E. coli, known for the digestive disorders it can cause.

The new technology is the result of research by a team of scientists from the University of British Columbia (Canada), who have opted for a biogenic approach, that is, using living organisms to generate electricity. To do so, they have drawn on the dyes that bacteria use to carry out photosynthesis.

E. coli bacteria

Previous research had tried to extract the dyes to apply them to solar panels, but the process was expensive and toxic. So the scientists looked for another approach, based on genetic engineering. Instead of extracting the dyes, they programmed E. coli bacteria to produce a greater amount of lycopene, the same substance that is responsible for the red color of fruits and vegetables.

After coating the bacterial culture with a mineral that acted as a semiconductor, they spread a layer of the mixture on a glass surface. They then found that the bacteria were capable of generating electricity even in conditions of very low luminosity, such as the light that reaches us on cloudy days.

The results are extremely encouraging, as they have nearly doubled the amount of electricity generated, from the 0.362 milliamps per square centimeter achieved in previous experiments to 0.686 milliamps per square centimeter. It is the largest electric current generated by a biogenic photovoltaic cell, according to Vikramaditya Yadav, the scientist who has spearheaded the project.

Although it is still early to accurately calculate the savings that the new technology can bring, Yadav assures that production would be cheap and sustainable. In addition, the applications of these solar panels made from living bacteria could extend to mining or underwater exploration.

Biosynthesis, green chemistry

Vikramaditya Yadav is one of the pioneers of a new field of research that could have repercussions in numerous areas, not only energy but also pharmaceuticals. It is a field that Yadav has baptized "biosynthetics".

In an article published a few years ago in an ACS journal, the Indian scientist pointed to diminishing returns in the search for new medicines. New drugs need up to ten years of development and investments of more than one billion dollars. It was time to try new approaches and integrate the benefits of genetic research.

Fundamentally, biosynthetics consists of discovering and synthesizing bioactive molecules, expanding the spectrum of chemical investigation. This process, called "metabolic engineering", allows the synthesis of new drugs of interest to industry and medicine through the alteration of genes or of the metabolic fluxes that occur in microorganisms.
May 17, 2019

Is it possible to convert the Sahara into sustainable energy?


Whenever I visit the Sahara, it strikes me how sunny and hot it is, and how clear the sky can be. Apart from a few oases, there is little vegetation, and most of the largest desert in the world is covered with rocks, sand, and dunes. The Saharan sun is intense enough to provide the Earth with considerable solar energy.

The statistics are amazing. If the desert were a country, it would be the fifth largest in the world; it is bigger than Brazil and slightly smaller than China and the United States.

According to NASA estimates, each square meter receives, on average, between 2,000 and 3,000 kilowatt-hours of solar energy per year.

Since the Sahara has an area of about nine million square kilometers, this means that the total available energy - that is, if every centimeter of the desert absorbed every drop of solar energy - is more than 22,000 million gigawatt-hours (GWh) per year.
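
As a quick sanity check of that figure (our own back-of-the-envelope arithmetic, using the article's numbers and the midpoint of NASA's range):

```python
# Total solar energy falling on the Sahara per year, assuming each
# square metre receives ~2,500 kWh/year (midpoint of 2,000-3,000).

area_km2 = 9_000_000                  # area of the Sahara
area_m2 = area_km2 * 1_000_000        # 1 km^2 = 1e6 m^2
kwh_per_m2_year = 2_500               # midpoint of the quoted range

total_kwh = area_m2 * kwh_per_m2_year
total_gwh = total_kwh / 1e6           # 1 GWh = 1e6 kWh
print(f"{total_gwh:,.0f} GWh/year")
# ~22,500,000,000 GWh/year, i.e. more than 22,000 million GWh,
# matching the figure quoted above.
```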

Solar park


This is a figure that requires context: it means that a hypothetical solar park covering the entire desert would produce 2,000 times more energy than the largest power plants in the world, which generate only 100,000 GWh per year.

In addition, the Sahara has the advantage of being very close to Europe. The shortest distance between North Africa and Europe is only 14.4 kilometers, at the Strait of Gibraltar.

But even if the distance were greater, across the widest part of the Mediterranean, it would still be possible to transport energy. After all, the longest submarine cable in the world runs about 600 kilometers between Norway and the Netherlands.

Over the past decade, scientists (including my colleagues and me) have investigated how desert solar energy could meet growing local demand and, eventually, supply Europe as well, and how this could work in practice. These academic ideas have materialized into rigorous plans.

Desertec

The main attempt was Desertec, a project announced in 2009 that quickly acquired a significant amount of funding from several banks and energy companies, before collapsing when most investors withdrew five years later, citing high costs.

These projects are held back by a series of political, commercial and social factors, including the lack of development in the region.


Among the most recent proposals are the TuNur project in Tunisia, which aims to supply energy to more than two million European households, and the Noor Complex solar power plant in Morocco, which also intends to export energy to Europe.

At present, there are two main technologies for generating solar electricity in this context: concentrated solar power (CSP) and conventional photovoltaic solar panels. Each has its advantages and disadvantages.

Concentrated solar


Concentrated solar energy uses lenses or mirrors to focus the sun's energy at a single point, which becomes very hot. This heat generates electricity through conventional steam turbines. Some systems use molten salt to store energy, which also allows electricity to be produced at night.

CSP seems to be the most suitable for the Sahara because of the direct sunlight, the lack of clouds and the high temperatures, which make it much more efficient.

However, the lenses and mirrors could be covered by sandstorms, and the turbine and steam-heating systems remain complex technologies. But the most important drawback of this technology is that it would make use of water resources, which are scarce in the desert.

Solar photovoltaic

Photovoltaic solar panels, on the other hand, convert the sun's energy into electricity directly, using semiconductors. It is the most common type of solar energy, since it can be connected to the electricity grid or distributed for small-scale use in individual buildings. In addition, it provides reasonable performance when the sky is cloudy.

But one of its disadvantages is that when the panels get too hot, their efficiency decreases. This is a problem in a part of the world where summer temperatures can easily exceed 45 °C in the shade.

Keep in mind that demand for energy for air conditioning is highest during the hottest hours of the day. Another problem is that sandstorms can cover the panels, further reducing their efficiency.
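
To illustrate the heat penalty, here is a hedged sketch of how panel output derates with temperature. The -0.4 %/°C coefficient is a typical value for crystalline silicon assumed for illustration; the article gives no specific figure, and real coefficients vary by panel.

```python
# How PV output falls as cell temperature rises above the 25 degC
# standard test condition. TEMP_COEFF is an assumed typical value
# for crystalline silicon, not a figure from the article.

TEMP_COEFF = -0.004   # fractional power change per degC above 25 degC

def pv_power(p_rated_w: float, cell_temp_c: float) -> float:
    """Approximate panel output at a given cell temperature."""
    return p_rated_w * (1 + TEMP_COEFF * (cell_temp_c - 25))

for t in (25, 45, 65, 85):
    print(f"{t} degC -> {pv_power(300, t):.0f} W")
# A 300 W panel loses ~24 W at 45 degC and ~72 W at 85 degC, which is
# why desert heat matters despite the abundant sunshine.
```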

Both technologies need a certain amount of water to clean the mirrors and panels, which makes water an important factor to consider. Most researchers suggest integrating the two technologies and developing a hybrid system.

A small part of the Sahara could produce as much energy as the entire African continent currently produces. As solar technology improves, production will become cheaper and more efficient. The Sahara may be inhospitable to most plants and animals, but it could produce sustainable energy to keep all of North Africa, and beyond, alive.
May 17, 2019

What is blue energy?


Much water has passed under the bridge since the middle of the last century, when Professor Pattle ventured that the salinity difference between the salt water of the sea and the fresh water of rivers at their mouths could generate a difference in osmotic pressure, the basis of so-called blue energy.

That is, by separating both liquids with a special semi-permeable membrane, the fresh water would naturally flow into the chamber with salt water to decrease the salt concentration. By keeping the volume of that chamber fixed, the pressure on the saltwater side would increase and could, in theory, be used to drive a turbine that, in turn, would generate electricity.
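
The driving force can be estimated with the van 't Hoff relation Π = icRT. A minimal sketch (our own illustration; the seawater concentration is a typical textbook value, not a figure from the article):

```python
# Osmotic pressure difference between seawater and fresh water,
# via the van 't Hoff relation  Pi = i * c * R * T.
# Seawater approximated as 0.6 mol/L NaCl (two ions per formula unit).

R = 8.314          # gas constant, J/(mol*K)
T = 298.0          # temperature, K (25 degC)
i = 2              # van 't Hoff factor for NaCl (Na+ and Cl-)
c = 600.0          # salt concentration, mol/m^3 (~0.6 mol/L)

pi_pa = i * c * R * T            # osmotic pressure, in pascals
print(f"{pi_pa / 1e5:.0f} bar")
# ~30 bar, roughly the pressure at the base of a 300 m water column -
# the head that a PRO turbine can tap.
```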

And we are not talking about a token amount: there are studies indicating that, if harnessed, it could cover 80% of the world's energy needs. But the eureka moment had not yet arrived, because at that time there was no technology that could take advantage of such an energy torrent. It still had to be put into practice.

In 1973, an American professor named Sidney Loeb, inspired by the salinity difference between the Dead Sea and the Jordan River, developed a membrane system based on pressure-retarded osmosis, the so-called PRO system, which uses membranes developed ad hoc to apply the principle pointed out by Professor Pattle.

Very high cost

The problem was that the cost of manufacturing the membranes was prohibitive. It would take years for this to be reduced significantly.

Finally, in 2009, a plant using this technology was inaugurated in Tofte (Norway). However, it was still not time to uncork the champagne: it barely produced 10 kW of electricity, with a yield of 1 W/m².

In addition, due to bacterial action, the pores of the membranes clogged and lost effectiveness. In 2013, the project was shut down. It was time to try another, complementary technology.

It was Loeb himself who, four years after developing the PRO system, announced the technique of reverse electrodialysis (RED). This time, instead of taking advantage of water pressure, the idea was to exploit the positive and negative charges in a body of salt water and a body of fresh water, separated by ion-selective membranes, to generate an electric current.

With this system, it is the salts that pass through the membranes, in such a way that one type of membrane allows only positive ions to pass toward the cathode, while the other passes only negative ions toward the anode. From this movement of electric charge, a current is generated and exploited.
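
The voltage available per membrane can be estimated with the Nernst equation. A hedged sketch follows, with typical seawater and river-water concentrations assumed for illustration; real membranes fall short of this ideal:

```python
# Ideal (Nernst) voltage across one perfectly selective ion-exchange
# membrane separating seawater from river water.

import math

R = 8.314        # gas constant, J/(mol*K)
T = 298.0        # temperature, K
F = 96_485       # Faraday constant, C/mol
c_sea = 0.5      # typical seawater NaCl concentration, mol/L
c_river = 0.017  # typical river-water concentration, mol/L

e_membrane = (R * T / F) * math.log(c_sea / c_river)
print(f"{e_membrane * 1000:.0f} mV per membrane")
# ~87 mV per ideal membrane; a cell pair (one cation- and one
# anion-exchange membrane) roughly doubles this, so practical RED
# stacks pile up hundreds of pairs to reach useful voltages.
```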

The first plant with the RED system was inaugurated in the Netherlands in 2014, thanks to a project by Wetsus, the Dutch water institute, in Leeuwarden. REDstack, the resulting company, has been working on electricity generation since then, with a production of 50 kW.

Nanotechnology comes to the rescue

However, new advances point toward a truly efficient and competitive system to capitalize, once and for all, on the energy potential of osmosis.

The key was to reduce the size of the holes through which the ions cross the membrane to the atomic scale. Thus, at the end of 2016, Nature published the development of a new molybdenum disulfide membrane, three atoms thick, with the potential capacity to produce 1 MW per square meter.

That is, such a surface could light 50,000 low-energy bulbs. Apart from its low cost, another advantage of this material is that industrial plants would not be needed: the membranes could be placed directly in river estuaries to generate electricity.

Wednesday, May 15, 2019

May 15, 2019

Applied science (1): Quibim, the data hidden behind an image


I remember that a few years ago the journal Nature Neuroscience revealed, in its always interesting Focus section, a very significant fact: in the previous five years, the number of scientific articles describing new neuroimaging techniques had increased by fifty percent. That editorial was written in 2013, and since then this field has done nothing but grow, almost immediately offering innumerable applications in research, early diagnosis, and more effective treatments for patients.

Engineers, software developers, doctors, and mathematicians form numerous research teams that are reaching levels of innovation in image analysis unthinkable just a few decades ago.

In this new series in the Scientific Culture Notebook of the UPV/EHU, I will contact Spanish scientists who are working on internationally recognized projects and whose work can serve as an example of how basic research turns into applied science at the service of society. The first interviewee is Ángel Alberich, founder of QUIBIM (QUantitative Imaging Biomarkers In Medicine), a company focused on the analysis of biomarkers using image processing techniques.

One of the main reasons to contact Ángel is his recent selection by the prestigious MIT among the Ten Innovators Under 35 Spain 2015, in the Spanish edition of its Technology Review.

At only 31, this young telecommunications engineer with a master's in biomedical engineering, married and with a daughter, manages daily to combine family life with his work as scientific-technical director of the Biomedical Imaging Research Group at the Hospital La Fe in Valencia, as well as with the biotechnological image processing company he founded just over three years ago.

Quibim is a good example of the process that takes us from basic research to the application of this acquired knowledge and its implementation as a service accessible to society.

In the first stage, all the effort went into research. As Ángel himself explains: "During all that time we developed a series of image processing algorithms that we published in scientific articles in specialized journals. For example, a new image analysis method to obtain information on cellularity, or a new algorithm to assess the strength of bone or to detect the fat present in the liver."

Over the years, the results of this research were grouped and organized into a set of tools and algorithms that can be offered as a service to any hospital, doctor, or research project that needs to extract certain information from a biomedical image. Thus Quibim was born, and "just as clinical analysis laboratories analyze blood, we are dedicated to analyzing images".

To explain his work, Ángel gives me a very graphic example: "Imagine that a doctor observes an MRI of a patient's liver and notices a hypointensity. Well, we analyze that image and measure the biomarker they need, such as the exact percentage of fat in that liver."

The imaging techniques offered by Quibim are based on two types of modeling: structural and dynamic algorithms. On the one hand, structural modeling extracts volumes, shapes, textures, and intensities, quantifying these biomarkers very precisely. On the other hand, dynamic models incorporate a changing quantity, such as the passage of time, and in this way build a mathematical model to follow a tumor or any other lesion over time.
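
As a schematic illustration of the structural side (our own toy example; Quibim's actual algorithms are not described in detail here and are certainly more sophisticated), extracting a volume and mean-intensity biomarker from a segmented region might look like this:

```python
# Toy sketch of a structural imaging biomarker: given a 3-D image
# volume and a binary mask of a segmented region (e.g. a liver ROI),
# report the region's volume and mean signal intensity.
# Illustration only, not Quibim's actual pipeline.

import numpy as np

def region_biomarkers(image: np.ndarray,
                      mask: np.ndarray,
                      voxel_volume_mm3: float) -> dict:
    """Volume (mL) and mean intensity of the voxels where mask is True."""
    voxels = image[mask]
    return {
        "volume_ml": voxels.size * voxel_volume_mm3 / 1000.0,
        "mean_intensity": float(voxels.mean()),
    }

# Example with synthetic data: a 64^3 volume with 1 mm^3 voxels.
rng = np.random.default_rng(0)
image = rng.normal(100, 10, size=(64, 64, 64))
mask = np.zeros_like(image, dtype=bool)
mask[20:40, 20:40, 20:40] = True          # a pretend segmented organ
print(region_biomarkers(image, mask, voxel_volume_mm3=1.0))
```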

A few months ago, Quibim's work received the award for best oral communication presented by a member under the age of 35 at the congress of the Bone Research and Mineral Metabolism Society (SEIOMM), for a presentation on image processing of the trabecular bone structure, which you can see in the image above.

To explain it in simpler words: architects and engineers know well what the risk of fracture is. This is not only applicable to materials, metals, or beams, but is equally important in a bone. Currently, the patient only has densitometry (a measurement of density) and little else, and that is not enough: we need a microstructural study to know exactly what state those beams and pillars of the bone are in.

The company was founded in 2012 and currently works with doctors, clinics, and especially research teams that develop new drugs and need to extract from these images precise data on the different biomarkers they are working on.
May 15, 2019

Without science there is no progress


In the #sinCiencia campaign taking place these days in different media on the internet, different arguments are advanced. The most frequent refers to the relationship between science and health, between science and quality of life, or between science and progress.

The relationship between scientific research in the biomedical field and health is evident. So is the relationship between research in other fields of knowledge, typical of physics and chemistry, and technological development. In short, scientific research is, increasingly, the condition for significant progress to occur in key areas of human well-being.

And yet, I believe that many of these arguments, formulated as they are to imply a direct and relatively close relationship in time between research and technological or health advances, are not the best way to convince the general population that science matters and is worth investing in. I will explain below where I see the weaknesses of those arguments.

In the first place, there is a disparity of opinions about whether or not there is a relationship between investment in science and competitiveness. There are important specialists in development economics who argue that there is no solid evidence of that relationship, and I suspect they are right. To explain why, we would have to dive into the stormy waters of development economics, and I will not do so, because it is not my specialty. However, I will summarize the argument I have read there: competitiveness, even when linked to innovation, depends more on factors related to trade (freedom of movement of goods and ideas), the institutional environment (favorable or not to risk), and incentives, than on policies aimed at promoting R&D and innovation.

Another weakness of these majority arguments is that they easily lead to proposals to end basic research: if we investigate in order to generate wealth, then we should shelve or directly abandon research that can be very expensive and whose benefit we do not know when it will be obtained, where it will be obtained, or even whether it will be obtained at all. The American Tea Party promoted this position in last year's elections to Congress and the Senate, and I have no doubt that it would be a very appealing argument for the anti-Enlightenment sectors that abound in the Spanish political and media cavern.

With the arguments that defend the relevance and convenience of investing in science in order to live better, there is a third additional risk: the recourse to Unamuno's "let them invent". And it would seem a solid argument. Let's see. If it is accepted that each country has its own sources of wealth and makes use of them to progress according to its own aptitudes, there would not be much difficulty in proposing that, in the same way that the Norwegians have oil, the Americans technology and software, the Italians design and the Brazilians fruit, Spain could exploit tourism and solar energy, for example; and that if it manages to do that well enough and at a low price, its problems would be solved [yes, I know I exaggerate, but this is in fact being proposed in a watered-down version]. In other words, what is suggested is that we dispense with knowledge that is not directly related to our own "natural" sources of wealth.

And my fourth reason against "utilitarian" arguments in favor of science has to do with the fact that their abuse, in my opinion, discouraged interest and scientific vocations among young people during the good years, and has undermined the foundations of the genuine discourse in favor of science. For reasons of space, I will omit further explanation and refer the reader to future writings in which I will develop this idea in more detail.

The practical usefulness of science should not be denied, of course, nor the convenience of betting on it if the intention is to develop economic sectors that are intensive in scientific knowledge. And it is clear that certain developments and innovations would never happen if certain scientific fields were not cultivated. But the supreme argument in favor of science, in my opinion, is another, much more complex one.

Scientific activity in general, and the scientific culture that emanates from it, contribute to creating a climate of excellence that spreads to the whole of the social body. That climate reaches, as if by osmosis, all sectors and activities, and encourages things to be done well. The main mechanism of dissemination is education, and more specifically, university education. University teaching has to be nourished by research, not only in the scientific fields, and thus incorporate the critical component that should truly characterize university graduates. This critical component is an indispensable condition of good high-level professional performance and, therefore, of the good functioning of society as a whole. And that good functioning ends up having positive effects in all social areas.

But there is more. Science has certain values, which are what have made it such a powerful method of acquiring knowledge. Democratic societies share those same values with science, and they share, in a certain way, its way of progressing. These values (optimism, tolerance, skepticism, and humility) are the values that adorn democratic societies; they are essential for scientific progress and for social progress, and their cultivation through science has systemic effects. In Popper's words, both in the open society and in scientific development, the conjecture-refutation sequence has proved extraordinarily fruitful. So it is in science; those of us who are dedicated to it already know this, and we have seen how some notions (sometimes our own) have been refuted and replaced by others.

It is no accident that the societies that have made the greatest effort in the development of science are the ones that work best. And it should be noted that the fruitful effort, the one that has been successful, is the effort sustained over time. Therefore, cuts in scientific activity, even if intended to be only transitory, can have devastating effects on the system and on the medium- and long-term development of a country.