The Relationship between Complexity and Energy

One of the main assumptions underlying the thinking of "Beyond Innovation" (my latest book, available on Amazon) is the existence of an energy linked to the complexity of a system. This can be shown mathematically through the Lagrangian formalism.

This may seem odd to an economist, but for a modern physicist there is no difference between physics and geometry. Einstein established a relation between the force of gravity and the curvature of a manifold known as spacetime. A manifold is a geometric object. Before Einstein, space was a simple relationship between two masses; we could imagine space as a void between them, so that if there were no masses there would be no space. Modern physics is completely different. Spacetime is a geometric object with physical existence, and its shape defines how objects move on it, but that shape is not fixed. Spacetime is elastic: its curvature changes when it interacts with masses. The Lagrangian function is defined through the curvature of spacetime. It is simply an additional geometric restriction, and minimizing the action implies that the path followed by any body is a geodesic curve on the manifold.

In quantum physics the working process is similar. Physicists analyze symmetries, which are geometric properties, in order to characterize a quantum system. A symmetry implies a quantity that is preserved after a change. In terms of complexity, a symmetry implies structure, and this structure is represented by information that is preserved after the change. This preserved information lets us define the conservation laws.

If we analyze an economic system as a vector space, we can define several restrictions that, in that framework, will always be geometric ones. And since an economic system is a real system, it must be subject to physical laws, so it is possible to define a Lagrangian function for it that will follow the principle of least action.

Lagrangian functions are usually applied to the resolution of minimization problems. It is worth noticing that they are used to minimize some functional subject to constraints.

Let $L(x,\dot{x},t)$ be the Lagrangian of the economic system, with a vector of state variables $x=(x_1,\dots,x_N)$.

The action will be defined by:

$$S=\int_{t_0}^{t_1} L(x,\dot{x},t)\,dt$$

with

$$L = T - V$$

(kinetic and potential energy).

If we have $m$ additional dependent variables, they will be related through equations of the form:

$$g_k(x,t)=0,\qquad k=1,\dots,m$$

We can define a new Lagrangian from a set of constants $\lambda_k$ (the Lagrange multipliers),

$$L' = L + \sum_{k=1}^{m}\lambda_k\,g_k(x,t)$$

and its action will be:

$$S'=\int_{t_0}^{t_1} L'\,dt$$

From the Euler–Lagrange equations we get:

$$\frac{d}{dt}\frac{\partial L'}{\partial \dot{x}_i}-\frac{\partial L'}{\partial x_i}=0,\qquad i=1,\dots,N$$

As the constraint terms $\lambda_k\,g_k(x,t)$ do not depend on $\dot{x}$, we can see that

$$\frac{\partial L'}{\partial \dot{x}_i}=\frac{\partial L}{\partial \dot{x}_i}$$

and

$$\frac{\partial L'}{\partial x_i}=\frac{\partial L}{\partial x_i}+\sum_{k=1}^{m}\lambda_k\frac{\partial g_k}{\partial x_i}$$

so the equations of motion take the form of the free ones plus an additional term that can be absorbed into the potential energy:

$$L' = T-\left(V-\sum_{k=1}^{m}\lambda_k\,g_k(x,t)\right)=T-V'$$
From this, we can identify any geometric restriction on the state space of dimension N as equivalent to a storage of potential energy in the state space of dimension N+M.
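As a minimal worked illustration (a toy example of my own, not taken from the book), take two state variables with a quadratic kinetic term, a linear cost potential of slope $c$, and one linear restriction:

$$L=\tfrac{1}{2}\dot{x}^2+\tfrac{1}{2}\dot{y}^2-c\,y,\qquad g_1(x,y)=y-a\,x=0$$

$$L'=L+\lambda\,(y-a\,x)\quad\Rightarrow\quad \ddot{x}=-\lambda\,a,\qquad \ddot{y}=-c+\lambda$$

Differentiating the restriction twice gives $\ddot{y}=a\,\ddot{x}$, so $\lambda=c/(1+a^2)$. The multiplier term is absorbed into the potential as $V'=c\,y-\lambda\,(y-a\,x)$: the two-variable (N+M) description carries the restriction exactly as stored potential energy, while the same motion could equally be written using the single free variable $x$.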

Since the restrictions appear in the Lagrangian as additional potential energy, we can affirm that a restriction diminishes the disposable energy of the system: a system with restrictions has less disposable energy than a free system.

Now we can look at the reverse problem. If we have a system whose real dimension N we do not know, and we are analyzing it with a state space of N+M variables, we can use a complexity metric C in order to determine its stability.

As C measures the existence of structured information, C is measuring the number of different relations among the N+M variables, and this value is equal to the number of relations among the N independent variables, because both are descriptions of the same system. As we have seen, those relations are defined by the Euler–Lagrange equations and the constraints, so the number of equations is the same in both descriptions.

The relations are those defined by the state equations and by the geometric constraints. Any complexity found beyond the dynamic equations therefore implies the existence of geometric restrictions, and this additional complexity can be linked to the potential energy stored by them.

In a physical system the Lagrangian does not depend on time; it defines the structure of the system. The structure of a planetary system can be perfectly expressed through the Lagrangian or Hamiltonian functions, which are in fact geometric restrictions in a state space.

Mr. Luis Díaz Saco

Executive President

saconsulting

advanced consultancy services

Nowadays, he is Executive President of Saconsulting Advanced Consultancy Services. He is an Industrial Engineer (EQF 7), expert in Automation, Computer Vision and Robotics. He has been Head of Corporate Technology Innovation at Soluziona Quality and Environment and an R&D&I Project Auditor for AENOR. He has acted as Innovation Advisor to Endesa Red, representing that company in working groups of the Smartgrids Technology Platform in Brussels and collaborating in the preparation of the research strategy of the EU.

Information is not power, but knowledge is

In the era of big data, most people think that information is the ultimate source of power; however, this is an erroneous concept. Wrong concepts drive bad actions and, finally, a loss of real power.

Information can be seen as a set of data samples about a phenomenon. Knowledge is much more than this: knowledge should be provided by a model of the behavior of that phenomenon that lets us make predictions about the future. Power resides in the anticipation of the future that lets us take advantage of it.

This is not a new concept. Bertrand Russell wrote about how Chinese emperors protected the Jesuits because they could predict eclipses better than their own astronomers, and this fact could be used to show the power of the Empire and its ruler to the people.

From ancient times, even political power was linked to knowledge, although for common people it was related to magic instead of science.

Espionage services are known as intelligence services for the same reason. Although the hard work of espionage is gathering hidden data, what provides value to a government is not the raw data but the data processed in order to make the right decisions. Intelligence can be seen as the process of simplifying a large set of data in order to anticipate an action that provides an advantage to reach some desired target.

In the era of big data this is valid again. The role of data scientists is basic in order to take advantage of the information gathered by a system working with big data. However, we are at the beginning of this era and the systems processing large amounts of data are still very naïve, although this technology is improving its performance fast, day by day.

There are several things to be considered when we are working with a large amount of data:

  • We need to know the limits of information processing.
  • We need to know the issues related to the quality of information.
  • We need to know the effects of the noise in the data.

Gathering data does not by itself provide a model of reality. This was scientifically demonstrated through the Nyquist theorem, which shows that in order to recover a band-limited periodic continuous signal it is necessary to sample it at twice its highest frequency. In simpler words, in order to get perfect knowledge of a periodic phenomenon it is necessary to gather a certain amount of data. The theorem goes further and shows that sampling the signal at a higher frequency cannot provide more information about it. In simpler words, getting more than the required information about a phenomenon will not provide more knowledge about it.
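A minimal sketch of this idea (my own illustration, not part of the original article; the signal, frequencies and sampling rates are arbitrary choices): sampling below the Nyquist rate loses the signal, sampling above it recovers it, and oversampling far beyond it adds essentially nothing.

```python
import numpy as np

f_max = 5.0                                        # highest frequency present in the signal (Hz)
signal = lambda t: np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * f_max * t)

def reconstruct(sample_times, samples, t, T):
    """Whittaker-Shannon interpolation from samples taken every T seconds."""
    return sum(x * np.sinc((t - tk) / T) for tk, x in zip(sample_times, samples))

t_eval = np.linspace(2.0, 4.0, 2000)               # central region, away from edge effects
for fs in (1.2 * f_max, 2.5 * f_max, 10 * f_max):  # below, above, and far above the Nyquist rate
    T = 1.0 / fs
    sample_times = np.arange(0.0, 6.0, T)
    samples = signal(sample_times)
    err = np.max(np.abs(reconstruct(sample_times, samples, t_eval, T) - signal(t_eval)))
    print(f"fs = {fs:5.1f} Hz -> max reconstruction error {err:.3f}")
```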

This is important in systems related to big data. The knowledge about a phenomenon will not be improved by increasing the gathered data once we already have a perfect model of it. Many companies may be selling nothing at all around the data-gathering business.

Another important aspect is that models are a source of uncertainty, because we need to make assumptions about reality in order to create them.

First of all, most statistical analyses try to simplify a phenomenon by supposing that the problem is linear; however, this kind of simplification often cannot be assumed, because reality is usually non-linear.

When data scientists try to model a system, they usually try to use polynomials. Even if the system really is polynomial, in order to recover it we need to know the degree of the polynomial. We will not get better results by trying to fit a polynomial of higher degree to a set of data, and the errors that drive some kind of action can be huge.
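A small sketch of that failure mode (my own toy data: a quadratic system observed with a little noise): increasing the degree of the fitted polynomial does not improve the model, and the error in the region where we have to act explodes.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-1.0, 1.0, 12)
y_train = x_train**2 + rng.normal(0.0, 0.05, x_train.size)   # the true system has degree 2
x_test = np.linspace(1.0, 2.0, 50)                           # region where the action is taken

for degree in (2, 5, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    error = np.max(np.abs(np.polyval(coeffs, x_test) - x_test**2))
    print(f"degree {degree}: max error outside the data range = {error:.2f}")
```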

In other words, knowledge about the physics of the system will help us to calculate a better model of the system than using only computational brute force. Knowledge about the physics of the system is always better than the "stupid method" of the computer programmer.

Another issue to be considered is the following: thinking only of the Nyquist theorem, we can recover a periodic signal even if the data are biased, provided we sample the signal at the correct frequency. This is true, but the reason is that we are applying scientific knowledge of physics instead of the "stupid method" of the computer programmer: we assume that the signal is periodic. This does not hold if we do not have that knowledge about the signal. If the signal were polynomial, the bias could be important.

Imagine a system that we know can be modelled as a polynomial. That kind of signal is not band-limited, and the Nyquist theorem cannot be applied. If we only have data from the left side, all the values will have the same sign (for instance, positive), and the sign on the right side will depend on the degree of the polynomial. If it is odd, the sign will be contrary; but if it is even, the sign will be the same.

Figure 1: The same set of three points can be interpolated by the functions x² and −x³/3 − 2x/3.
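A short numerical check of the figure (assuming the three interpolated points are (−2, 4), (−1, 1) and (0, 0); the specific points are my assumption, as they are not stated in the text):

```python
import numpy as np

xs = np.array([-2.0, -1.0, 0.0])          # data available only on the left side
print(xs**2)                              # [4. 1. 0.]
print(-xs**3 / 3 - 2 * xs / 3)            # [4. 1. 0.]  -> both curves fit the data

x_right = np.array([1.0, 2.0, 3.0])       # extrapolation to the right side
print(x_right**2)                         # stays positive (even degree)
print(-x_right**3 / 3 - 2 * x_right / 3)  # turns negative (odd degree dominates)
```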

This fact is an example of the importance of data quality. Not all data sets have the same quality for providing a model of a system. In order to make a good interpolation, the data points need to be distributed uniformly over the entire domain of the system.

In the era of big data this is especially important, and many times it is not well analyzed. Although I can think of some examples related to social affairs, I will try to be politically correct and use an example about myself. Imagine that a job-seeking company is trying to classify me through a big data system in order to determine which job best fits my profile. I have worked in a pure-science research center as a researcher, and I have worked in a quality engineering company as a manager. The first fact would say about me that I am interested in creative positions, trying to provide new things for mankind outside the standard way of working; the second would say that I am an expert in the standardization of tasks and that I would be interested in defining and following rules.

That profile is too complex to be included in an easy classification, and it would be even more complex if we kept advancing along time. Any information biased to the left or the right of the temporal axis would drive an incorrect classification. In this case, it is important to analyze the whole profile and to have a more complex classification method for more complex profiles. One of the reasons for big data analysis is the analysis of more complex situations. The more complex the system, the larger the amount of required data, and the more complex the analysis procedure. As in the example of espionage and intelligence, large-scale analysis requires large-scale intelligence. That is the reason why the development of artificial intelligence has become fashionable in the era of big data.

Immunity to noise is very important when we are trying to define the model of a system. What happens when there are pieces of incorrect data inside the gathered data? Imagine a linear system and an analysis algorithm that tries to determine the order of the system that best fits the data. A little shift in a few data points can produce the result of increasing the order of the system that perfectly fits the data. In any system for analyzing data, we need some kind of method to filter the data, trying to avoid the negative effect of noise.

Again, this is not a definitive solution. Filters can eliminate special cases. Filters search for a mean behavior, eliminating the data far from that mean behavior. This kind of solution applied to a complex system may not be good enough.

From this point, we can understand that complex systems are not easily modelled. The use of generic modelling techniques for complex systems under great uncertainty can lead to very bad results.

Knowledge is power, but we must be sure that we have the proper knowledge. Scientific knowledge advances over time, and some people produce great leaps in the knowledge owned by mankind that improve our technology and our lives. Any model has a domain of application; outside of it, it cannot be considered knowledge. Newton's physics of space and time can help us determine the time we will spend on a trip by car, but without Einstein's physics we could not have satellite communications. Namely, limited knowledge provides limited power. The reason for basic research is a matter of power too: better knowledge provides greater power to mankind.

Scientific knowledge is linked to its limits; however, the use of computer algorithms without taking those limits into account leads to very wrong decisions.

Back to the previous example, now including big data and social networks: I have a lot of contacts in the quality business, and most classification algorithms would produce an analogy between my profile and the profiles of those contacts. However, although it is true that I have great knowledge of standardization and of management methodologies like those, my role inside a quality organization would be totally different, because I was an innovation manager. Whereas the role of a quality expert is to provide uniformity in procedures, so that things can be done in a repetitive and secure way, increasing the capability of the managing staff to assure the goodness of the provided products and services, the role of an innovation manager is to drive changes in the organization so that it can provide new products and services. Classifying people through a single common word does not define the different roles of different persons in an organization. This cannot be done simply by counting the number of links among different kinds of people.

On the other hand, the risk of assuming different things to be equal in order to simplify the analysis process in a computer program is not only economic. Day by day, many businesses are driven towards the majority, because through the internet it is possible to provide expensive services at a low price, thanks to a huge market volume that compensates for the required investment. If there are a few unsatisfied clients due to an incorrect segmentation, this does not produce a great economic loss, and providing different products for them might not be so profitable.

The risk is related to knowledge and power. In a global market, companies will get more income from mediocrity than from investing in improving the quality of their products. High-quality products will never be interesting if educated people are wrongly classified. There has always been a different demand from common people and from educated ones. Think, for instance, of music. Cult music, commonly called classical, has existed for many centuries, sharing the time with popular music. In previous centuries, writing cult music provided much more income than writing popular music, because a rich man could pay much more than a little group of poor people. Nowadays, the situation is the opposite. No single man can pay as much money as a musician can earn selling his song on iTunes at the economic price of one dollar, all over the world, through the internet. The result is obvious. Everybody knows now who Mozart was, and he was famous when he was alive, but nobody knows the current musicians writing cult music, and everybody knows the names of many musicians writing popular music who are in touch with the higher social classes due to their huge earnings.

The question arising here is: if technical and social improvements are driven by advanced knowledge provided by selected people, can the future society improve in a world built for mediocrity? There is something that can help to avoid this: competition. Man reached the Moon due to the space competition between the USA and the USSR rather than a demand from the people to do it. Competition among companies can produce the search for better products and services for people who do not demand them, even though market trends may be pointing to mediocrity. Competition among companies can produce better tools for classification and market analysis, and it can boost the customization of solutions for different kinds of people, because a better segmentation reduces the cost of that customization process.

Competition is what will keep demanding new and better products and services and will make mankind advance after the era of big data. A society or organization where competition, making new things, or enjoying thoughts and solutions different from the majority's are penalized will be a society with less power, driven to be stuck in mediocrity.

To finish this discussion, I would propose that you take complexity analysis techniques into account in order to cope with huge amounts of data where traditional statistics and classic modelling techniques cannot be effective enough. These techniques can provide knowledge about the structure of the system that will be useful to increase our capability to make better decisions.

 


 

The political correctness of innovation management

Some years ago, I attended a meeting of the managing working group of the Spanish Electricity Networks Technology Platform for the first time. One of the main issues to be discussed was the need to ask the government for better support for power electronics engineers, as an important matter for providing the grids with new smart devices. At a certain moment of the meeting, one of the attendees looked at me and said to everyone there that perhaps we should not make a great effort on that point of the agenda, when electricity companies hired management engineers instead of electronics ones. I was surprised, because I am an electronics engineer by profession, although I have additional education in business administration. However, he was right: if you look at my LinkedIn profile, you could easily think it is the profile of a politician instead of the profile of an innovator.

I have done military service, I have been hired by the public administration, my photograph was published in many newspapers when I was twenty-eight years old because I was working on projects to improve the life of handicapped people, and that work was even covered by the entire national press and television. I have collaborated in the development of regulations and standards, I have been a lecturer at economic forums for competitiveness, and I have attended international meetings for the economic development of the EU.

With this curriculum, I would probably have been an extremely good candidate for president of the USA. In fact, most Spanish MPs do not have such a good curriculum for their activity because, in Spain, we are not as demanding with our politicians as the American people are.

It may be true that my profile seems to be the profile of a politician instead of the profile of an innovator; however, there is a good reason for this: I have been an innovation manager at large corporations instead of an innovation manager at start-ups.

I could be a good political candidate, but I do not have the support of a political party because I have never searched for that kind of support. On the other hand, I could be a good engineer making innovative devices and products, at least I was when I was younger, but I do not have the image of an innovator because I have dedicated my career to management.

This is not a godsend. Like the attendees at the technology platform meeting, people will expect you to confirm their expectations. If you seem to be politically correct, people expect that you have the support of a strong organization, like a regular army officer, and you will find in front of you a handful of competitors with their very strong organizations behind them. If you seem to be an innovator, people expect you to be politically incorrect, having the strength and the willpower to convince people to adopt an idea very different from the one commonly accepted by the status quo, in the guerrilla way, undermining the arguments of competitors one step at a time.

There is a great difference between what we can call inner and outer innovation. The competences required to drive innovation are different if we want to introduce a disruptive innovation into a market or if we want to change the customs and strategies of a large organization.

This can be analyzed from a complexity viewpoint. Organizations are, by their own nature, very structured; markets, however, are much less structured, and their behavior has much more uncertainty. Driving innovation inside a large organization is similar to fighting in an open field (where you can see the resources of the competitor). Introducing an innovation into a market with a start-up, however, is similar to fighting in a jungle: you can take advantage of the fact that your competitor cannot see where you are, nor how many resources can be used against its market position.

Large corporations search for "politically correct" innovation managers, because large corporations define what must be considered politically correct in their markets; startups, however, require the opposite profile. A startup with a disruptive product will require a "politically incorrect" manager, someone who knows how to compete against the powerful structures of large corporations with fewer resources.

If we analyze the current economic situation, we will see that the complexity of the worldwide economy has increased in recent years. This should boost the role of innovation management for two reasons. "Politically correct" managers will be valuable in order to increase the strength of large corporations, driving changes in large companies to increase the resilience of their businesses. On the other hand, "politically incorrect" managers will have their own opportunity because, as market uncertainty increases, there is more room for societies to adopt great changes and better conditions so that startups can be more competitive.

 


 

A Frequency Analysis of Economic Complexity (II)

A non-sinusoidal waveform displayed on an oscilloscope. (Photo credit: Wikipedia)

In a previous article, I discussed the possibility of using mathematical models to predict the evolution of the economy of a company or a country. In this article, I want to continue that discussion with a more realistic case. In the former case, the benefit function was the integration of three different frequencies, trying to simulate the effect of periodic monthly, quarterly, and yearly events; however, that work was done with single-frequency signals, while in a real case payments are made in single packets instead of being distributed along a sinusoidal wave.

In order to work with a more realistic case, I will simulate the evolution of a little business with the following behavior:

  • A single employee who receives his salary on the last day of the month.
  • A single client who pays every three months, on the first day of the quarter.
  • A margin of thirty percent on the income.
  • A single payment of taxes on benefits of twenty-five percent, on the last day of the year.

And a second case with the same amount of income, but received daily.

The objective of these two cases is to analyze the effect of uncertainty on a possible model.
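The following sketch is my reconstruction of the two simulated businesses, not the original code of the article; the salary figure and the low/high frequency split are arbitrary assumptions. It builds one year of daily cash flow (monthly salary, a 30% margin on income received quarterly or daily, and a 25% tax on benefits at year end) and compares where the spectral energy sits.

```python
import numpy as np

days = 360                                   # one year of 30-day months
salary = 1000.0                              # assumed monthly salary
income_total = 12 * salary / 0.70            # yearly income giving a 30% margin

def benefit(daily_income: bool) -> np.ndarray:
    cash = np.zeros(days)
    cash[29::30] -= salary                   # salary paid on the last day of each month
    if daily_income:
        cash += income_total / days          # same yearly income, received every day
    else:
        cash[0::90] += income_total / 4      # the client pays on the first day of each quarter
    profit = cash.sum()
    cash[-1] -= 0.25 * max(profit, 0.0)      # 25% tax on benefits, last day of the year
    return cash

for daily in (False, True):
    energy = np.abs(np.fft.rfft(benefit(daily)))**2
    low, high = energy[:13].sum(), energy[13:].sum()   # split at the monthly frequency (12 cycles/year)
    print(f"daily income = {daily}: low-frequency energy {low:.3g}, high-frequency energy {high:.3g}")
```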

The benefit function of the former case would look like this:

Graph 1: Generated benefit function in the time and frequency domains. Quarterly case.

As we can see, the signal is not totally band-limited, but most of the energy is concentrated in the low frequencies, as expected. The reason is that incomes and expenses occur instantly, and the frequency response of that kind of signal (an impulse) is flat all over the spectrum.

The second example is similar, but in that case the income is constant, so its frequency response produces energy only at zero frequency. The signal has more energy at the low frequencies and less energy at the higher ones; if we remember the previous article, this implies that the business should be much more robust and predictable.

Graph 2: Generated benefit function in the time and frequency domains. Daily case.

In order to show you that fact, I am going to introduce the effect of uncertainty. I will consider that the costs are fixed (salaries usually are) but that the income comes from a market with high uncertainty: although the mean value of the income is preserved, it can change up or down by fifty percent (a very high uncertainty). We can see the effect of that kind of uncertainty on the income in the following graphs.

Graph 3: Original (blue) and noisy (green) signals for quarterly and daily incomes.

This graph shows that the daily income case reproduces a more similar behavior between the theoretical situation and the uncertain one, as we expected from the difference between low and high frequencies.

On the other hand, it is interesting to analyze our capability to build a model of both businesses. To do this, we are going to look at the differences between the real behavior and a model extracted from the low frequencies only. We will filter the signal in the frequency domain with a low-pass filter, and we will compare the result in the time domain with the initial signal. Thinking about the Nyquist frequency, since our most frequent event happens every thirty days, the Nyquist rate corresponds to events every fifteen days; however, we know that the signal will not be recovered exactly, due to the harmonics of the impulse inputs.
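A sketch of that low-pass "model" (again my reconstruction, reusing the benefit() function from the previous sketch; the ±50% uncertainty is applied to the quarterly income days):

```python
import numpy as np

def lowpass_model(cash: np.ndarray, cutoff_period_days: float) -> np.ndarray:
    """Zero every frequency whose period is shorter than cutoff_period_days."""
    spectrum = np.fft.rfft(cash)
    freqs = np.fft.rfftfreq(cash.size, d=1.0)          # cycles per day
    spectrum[freqs > 1.0 / cutoff_period_days] = 0.0
    return np.fft.irfft(spectrum, n=cash.size)

rng = np.random.default_rng(1)
noisy = benefit(daily_income=False)                    # quarterly case from the previous sketch
income_days = noisy > 0
noisy[income_days] *= rng.uniform(0.5, 1.5, income_days.sum())   # +/-50% income uncertainty

for period in (15, 6):                                 # the two cut-offs used in Graphs 4 and 5
    model = lowpass_model(noisy, period)
    rms = np.sqrt(np.mean((model - noisy)**2))
    print(f"cut-off period {period} days -> rms difference between model and data {rms:.1f}")
```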

Graph 4: Original noisy signals (blue) and filtered noisy signals (green), with a cut-off frequency corresponding to events with a period of fifteen days.

Additionally, I am going to include some harmonics by increasing the cut-off frequency to events produced every six days (nearly a week).

Graph 5: Original noisy signals (blue) and filtered noisy signals (green), with a cut-off frequency corresponding to events with a period of six days.

 

As we can see, it is possible to recover the behavior of the business, and it seems possible to make a mathematical model of this system if we obtain data from the business every week; visually, the result of the second case is better.

I know that this discussion is not written in the language of economists, and that is why I am now going to translate it by talking about the conclusions.

Most businesses are much more complex than these ones, but we can make some analogies. For instance, the initial model is similar to the way a national state works: it pays salaries every month and gets income from VAT every three months and, finally, yearly from taxes on income and corporations. Of course, there are many other expenses with different periods that are not considered here. The second example is more similar to large private companies selling consumer goods: they receive income at any time and they pay periodically. From this analogy we can find that private management is more robust (in the complexity sense) than public management in most cases; namely, the way they receive their income makes them less exposed to unexpected events (as shown in Graph 3).

Another conclusion we can draw is that increasing VAT is better than increasing taxes with longer periods, such as income or corporate taxes, in order to improve the manageability of public accounts. So this is not just a happy idea of the officials in Brussels; it can be demonstrated mathematically.

Looking at Graph 5, when we cannot discriminate well between signal and noise (between our capability to earn income and the uncertainty of the markets), we cannot get a model to predict the future. We have modelled the past behavior of the first case well, but that does not imply that we can extrapolate it to forecast the future (see Graph 3). With the second case, the effect of the uncertainty is integrated, and we could get a more reasonable model.

As many people know, investing in highly volatile markets such as the stock exchange based only on the expected future results of the business makes mathematical sense if it is done thinking of the long term, because the information about the business is usually sampled quarterly instead of daily; however, the stock exchange has the advantage of letting us adjust our investments daily to every event that we receive from the economic press.

In the same way that daily income produces more robust businesses even under an uncertain scenario, having the capability to act on the costs (with a more flexible labor market) can produce more robust businesses. Again, this is not a happy idea of the officials in Brussels; it is clearly shown through mathematics. On the other hand, a problem of public debt should be approached by making public costs and government staff more flexible, instead of acting only by increasing taxes. This has the same mathematical sense, but it is not usually considered by any public official in Brussels or in the national states.

 

 


 

How Complexity Can Affect the Performance of a Company after a Merger

Pictogram for Merge. (Photo credit: Wikipedia)

Mergers are usually analyzed through financial parameters: you can look at the cash flows and easily determine whether the result will produce a positive or a negative cash flow. With this approach you can determine whether the result will continue to provide benefits. The value over the individual companies that a merger produces is usually due to a later process of rationalization. This is the common name used to designate the simple process of eliminating the duplicated structure. As you are reducing structure, you are reducing complexity, and thus the value provided by a merger is due to a reduction of complexity. If you have the salaries of the managing staff, you can determine a priori, looking at the accounts, the benefits that the merger will provide.

This is the case of a merger of similar companies in the same market, but other mergers can be done following strategic objectives of internationalization, vertical integration, and so on. In those cases there is no benefit due to a process of rationalization; however, the benefit is provided by a reduction of complexity too.

Graph 1: Complexity map of the Spanish company Gas Natural Fenosa in 2009.

In terms of complexity, not all the structure of a company is provided by the managing staff. Structure is defined by the relations among the different agents of the company, and the most important structure comes from the physical productive process itself. Many decisions about the business are fully driven by the physical productive process. In the case of vertical integration, for instance, you make it easier to get (and provide) raw materials for the productive process, avoiding the trading process and the uncertainty introduced by the markets. This is another way of reducing complexity. You have created a new strong link between activities, but you have eliminated many others, and the new link has much less uncertainty associated with it than a trading process.

Graph 2: Effect of the acquisition of the Spanish company Unión Fenosa by the Spanish company Gas Natural.

A merger of this kind is illustrated by the previous graph. In this case, the buyer was showing behavior near collapse (a sharp reduction of complexity) that could be corrected through the acquisition of an electricity company (vertical integration), increasing its complexity but also increasing its maximum complexity and resilience, and showing a typical growing behavior.

But we know that not all mergers have a successful result. The reasons are related to the complexity of the companies. Although on paper we can determine the cost cuttings of the rationalization, the process in the real world is not so easy. There is additional structure that has no accurate reflection in the accounts, for instance knowledge or culture, which cannot easily be deduplicated. This is the reason why, in many cases, the managing staff of a smaller company acquired by a great corporation can finally get control of the whole merged company. This should not be a real problem for shareholders, but many times it is not possible to eliminate duplicated structure due to cultural differences, and the merger ends up turning a healthy company into a massively complex one that is driven to fail.

For instance, the inflexibility of the labor market can produce undesired effects on a merger. Imagine the case of an innovative and dynamic company merged with an old big elephant. In this case, the staff of the latter can be overprotected, and the pressure of the markets to carry out the rationalization can eliminate the staff of the former, who could be the strategic reason for merging both companies in an attempt to modernize the latter.

This example shows that if you analyze the companies looking only at the accounts, the result can be a great disaster.

The use of complexity measures to simulate the merger is an advance, because complexity can be measured through physical and financial variables simultaneously, even including macroeconomic variables to analyze the effect of the environment. With these techniques you can try to determine where the labor legislation contributes most to complexity, even though you cannot know the contract of every person.

Graph 3: Contribution to complexity of different physical, financial and macroeconomic variables of Gas Natural Fenosa in 2009.

The previous graph shows that a macroeconomic variable such as the price index (IPC España, at the right side of the graph) has a strong influence on the complexity of the business of a utility. That is an exogenous variable that we cannot control directly, but we could reduce its effect on the business through a strategy of internationalization. Complexity analysis can give us information about factors that can help us to establish a proper merger strategy. In this case, the number of employees (Empleados) is not showing a complex behavior, and we cannot foresee a problem due to the Spanish labor legislation, because the company can fit the number of employees well to the requirements of the business.

Another important aspect that we can get from the previous graph is that the financial variables contribute less to complexity than the physical ones. This shows two things: first, that the finances are well managed, and second, that production is more complex to manage than financial affairs. This is common in every business in a normal economic situation. Everybody can understand that it is usually more difficult to design a gas line, get the administrative permissions, and contract the people to build it than to get a credit. Of course, in a debt crisis things can become very different.

To finish, I would like to ask you to think about these issues:

  • Mergers can be seen as a matter of complexity improvement instead of value creation through rationalization.
  • Mergers can be analyzed from a strategic but objective viewpoint through a quantitative complexity analysis.
  • The physical processes of the companies can be much more important for the success of the merger than the financial ones.
  • The merger does not end with the constitution of the new board of directors; the merger is a process of several years, required to make the resulting company more efficient.

 


 

Software Complexity

Image of the internals of a Commodore 64 showing the 6510 CPU (40-pin DIP, lower left). The chip on the right is the 6581 SID. The production week/year (WWYY) of each chip is given below its name. (Photo credit: Wikipedia)

There is an old sentence attributed to Bill Gates that says: "640K of RAM is enough for anybody." I think Gates has denied that he was its author, but it is interesting to analyze how complexity is evolving in the software sector.

I began to write computer programs on an old Commodore 64, with only 64 KBytes of memory and an 8-bit 6510 microprocessor from MOS Technology, when I was sixteen. That computer had a very ancient and simple BASIC interpreter integrated in its ROM, and that hugely limited the programming capabilities of that simple device.

The Commodore 64 was a success (especially in Germany) for gaming, due to its better graphics and sound capabilities than its direct English competitor, the Sinclair ZX Spectrum. In order to take advantage of the "full potential" of the device, many programs were developed directly in assembler (the language closest to machine code).

Although the sentence attributed to Gates can seem stupid today, when my latest computer has 8 GBytes of RAM with a two-core, 64-bit microprocessor, and even my new smartphone has 4 GBytes (and they are not at the high end), I must recognize that it does not sound stupid when you have written software in assembler. In that case, it can even sound reasonable, because you can bet that you cannot fill 640 KBytes of memory with a program written in assembler during a month without using function libraries (and without taking data into account).

Writing a computer program in assembler is very hard, but in terms of the complexity of the task the language is very simple. When we talk about complexity without any metrics, the concept is sometimes not well understood, because we do not distinguish between the complexity of the task and the complexity of the system.

A program written in assembler produces code that always does exactly what you have written, because the assembled code specifies with total precision how the registers of the microprocessor must process the data. As precision is total, uncertainty is null and, in that sense, the program is complicated but it is not complex.

However, this kind of programming is very error-prone, because people do not think like a microprocessor. Programmers must translate high-level actions expressed in formal instructions into simple low-level handling of numbers. The uncertainty due to this error-prone programming produces a system with a high level of uncertainty in its behavior. The result can be a very complex system.

Current systems have only a small part of their operating system and some drivers in assembler, because even the operating system is written in a high-level language. A high-level language provides direct instructions that are turned into machine code by a computer program known as a compiler. A compiler can translate a few hundred lines of code into many thousands of lines of machine code. The degree of complexity of a high-level program is much larger than that of a low-level program, because the structure of the program can be very big. A microprocessor has only a very limited number of registers (internal variables used to make calculations), but a high-level program can have any number of variables providing much more functionality; however, the result can be less error-prone, because the language is closer to human language. The program can be complex, but the resulting system can have less complexity.

With a high-level language it is possible to write, in a month, a program that fills the memory of an old computer like the Commodore 64 once it has been compiled. In that case there is no relationship between complexity and memory size; however, as memory increases, software can be made to handle a bigger amount of data through many new variables and functions that operate on them. Although the probability of running wrongly due to an error-prone language is lower, the final uncertainty of the system can be larger, because the number of functions and variables where an error can appear is larger.

Systems are more complex due to more complex hardware (size and number of registers, number of cores, instruction sets) and due to more complex software (object-oriented programming, multiprocessing, program threads, multitasking operating systems, virtualization), but not due to memory size, although more memory allows a more complex software implementation. It is the structure of the systems that provides most of the complexity, rather than the uncertainty of a very error-prone programming language.

If there is a relationship between complexity and size, it is better defined by Moore's Law, which established the exponential evolution of the number of transistors integrated in a single microprocessor chip.

Graph 1: Moore's Law on the number of transistors in a microprocessor, by Wgsimon (Source: Wikipedia).

We say that something is complex when it can evolve in an unpredictable way under an unexpected event. With an assembler program on a microprocessor with an accumulator register and two index registers, things are very hard to implement, but errors can be easy to identify. But how can we know what is wrong in a system with dozens of processes, each written by several programmers, with different software libraries written in different computer languages, running simultaneously on a quad-core microprocessor, connected to several networks and fed with data from hundreds of computers, if it suddenly hangs?

Of course, system administrators have tools and techniques to identify those problems after they have happened (for instance, some wise computer scientist invented logs), but it would be more useful to predict some faults before they occur, especially in certain industrial systems.

I want to point out that not all this complexity is bad if it is well managed, namely, if the system is designed for simplicity. For instance, in a modern computer, when a process hangs, not all the system stops, only one of the cores; the system will run slower, but other critical tasks can continue without producing a collapse of the whole system. The collapse will depend on the importance of that process and on its links to the other processes running on the system. Another process could analyze the system automatically, find that there is a hung process, and stop it permanently or restart it in order to provide better performance. In a similar way, the complexity of the system could be measured through some control variables in order to anticipate a malfunction.

Sometimes complexity can be attacked with complexity; as in a war, fire can be the best answer to fire. This is not so odd: in order to reduce complexity and stabilize a system, we can use a controller device. Controller devices reduce the number of the most probable states of a system, but they can be complex systems themselves, and of course they must be tuned to do a certain task properly. This is how the activity of complexity management can be seen: we must design and tune some kind of complexity control device in order to keep the system from collapsing under unexpected events.

In the last example, we took advantage of a complex multicore microprocessor to run a critical process isolated from the rest of the system, which on some occasions can be a good strategy to keep the system in a state of proper performance. Here, hardware complexity provides less software complexity, because the software has been developed to take advantage of it. Complexity management is not simply a matter of trying to reduce the structure or the uncertainty; it is a matter of making the system work far from its maximum complexity. Any complex system should always be designed from a complexity viewpoint.

A software system can have a few lines of code and be very unstable, and another one can have a lot of lines of code and be very robust if it was designed with complexity in mind. Unfortunately, practically no commercial computer system working today has been fully designed from a systemic viewpoint, as hardware providers are different from OS providers, and these are different from application providers. In any system, applications from many different sources can be running, and this fact makes it more important that this complexity is taken into account by IT managers.

 


A Frequency Analysis of Economic Complexity

Oscilloscope (Photo credit: Wikipedia)

In information theory, complexity is the number of properties transmitted by an object and detected by an observer. Imagine that you, like any manager, try to predict the expenses of your company. You have some contracts with different providers. There are some expenses, such as taxes, that must be paid every year; you pay some invoices monthly, such as the cost of electricity and telephone; and you may be paying for raw materials every three months. Something similar can be done with income.

In this example, your cost function is the integration of three different cost frequencies. In order to make the problem clearer, I am going to consider that the payments are continuous, following a sine curve, instead of a set of discrete payments, but the reasoning remains valid.

In that situation, our benefit function would look something like this:

Graph 1: Generated periodic benefit function in the time and frequency domains.
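A minimal sketch of such a benefit function (my own illustration with arbitrary amplitudes, not the code used for the graph): three sinusoidal components with monthly, quarterly and yearly periods, whose spectrum shows exactly three peaks.

```python
import numpy as np

days = 3 * 360                                         # three years of 30-day months
t = np.arange(days)
benefit_signal = (1.0 * np.sin(2 * np.pi * t / 30)     # monthly component
                  + 0.7 * np.sin(2 * np.pi * t / 90)   # quarterly component
                  + 0.4 * np.sin(2 * np.pi * t / 360)) # yearly component

spectrum = np.abs(np.fft.rfft(benefit_signal))
freqs = np.fft.rfftfreq(days, d=1.0)                   # cycles per day
peaks = freqs[spectrum > 0.1 * spectrum.max()]
print("periods found (days):", np.round(1.0 / peaks))  # -> [360. 90. 30.]
```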

As we can see, the signal in the frequency domain has three frequency components. This function has been generated by a computer, so it can easily be replicated. In a perfect situation we could recover the full signal from a part of it, and we could make precise forecasts about the requirements for credit at any time.

Following the initial definition of complexity, this company would have a complexity of three: we only need to know the amplitude of three frequencies in order to know the full information of the system precisely.

Following different approaches to complexity, a company like that would not be complex. A complex system has a certain degree of uncertainty. When the behavior of a system can be predicted exactly, we consider that the system is complicated instead of complex.

The previous signal can be recovered from a set of points in the time domain. An electrical engineer named Nyquist demonstrated that a signal like that (band-limited) can be perfectly recovered if it is sampled at twice its highest frequency. This is the dream of any stock exchange investor. Unfortunately, the stock exchange does not behave like that: the stock exchange is really complex instead of complicated.

A complex system does not only have a well-defined structure; it contains a certain amount of uncertainty. In the previous example, we may know the mean of our phone bills, but some months we receive different values. That is the reason why management is more related to complexity management than to perfect forecasting.

Graph 2: Complex systems with different degrees of uncertainty (1, 5, 6 and 10), in both the time and frequency domains.

A very robust economy could have the behavior of the first plot in the previous graph. For instance, we do not know exactly the cost of the electricity or the phone bill, but it has little variability. In that situation we cannot recover the behavior of the system perfectly (the signal is not band-limited), but as the main frequencies are very clear, we could get a good approximation by sampling the signal at a frequency higher than Nyquist's.

This can be the case of a little startup business. Initially there is no income and the expenses are mainly fixed. The company can have losses, but its costs can be very predictable, as the main bills are the rent of an office, electricity and phone, and the interest on loans. It is very easy to manage. There is no direct relationship between benefit and manageability.

In the second plot we can see that the behavior of the system is preserved in the frequency domain, but it is more difficult to identify the system in the time domain. We cannot know anything precisely about the behavior of the system at future instants, but we can get better knowledge of the mean behavior over time using a low-pass filter.

This may not seem interesting for managing a little business, but by analyzing the low frequencies of the shares of a company on the stock exchange, we could get a better return from trading. This can sound odd, but it is well known that oil prices increase in winter and decrease in summer because the demand is periodic. Many companies can be affected by different periodic events with different frequencies. Mathematical models for trading are based on similar suppositions.

Looking at the last plot, you can see that in a very uncertain situation we can know nothing about the behavior of the system. That is the reason why mathematical economic models cannot work in a turbulent environment. In that case, investors usually search for a portfolio with lower volatility.

If you are the manager of a complex company, the uncertainty is inside your own company and you must take actions in order to reduce it. There are many actions that can be taken to do so, for instance contracting insurance or flat tariffs. Although the benefit can be lower, you will be able to control the company better under unexpected events.

To finish, I would like to point out that, as the frequency domain plots show, uncertainty and structure are similar. The difference is that structure is characterized and uncertainty is not. In a real case, functions are not totally band-limited; real structures have an infinite set of harmonics. By increasing structure, we increase the energy at the higher frequencies, as we do when the uncertainty (characterized here as white noise) increases.

Considering complexity as the energy provided by the addition of structure and uncertainty has a clear physical sense. Thinking of band-limited signals and the Nyquist theorem, robustness can be understood as the difference between the energy at low frequencies and the energy at high frequencies, which makes our system more predictable through sampling (the measures that controllers take in order to make managing decisions).
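One possible way to turn that remark into a number (my own reading of the paragraph, not an established metric): compare the energy below and above a sampling-related cut-off frequency.

```python
import numpy as np

def robustness(signal: np.ndarray, cutoff_period: float) -> float:
    """Between -1 and 1: positive when most of the energy lies below the cut-off."""
    energy = np.abs(np.fft.rfft(signal - signal.mean()))**2
    freqs = np.fft.rfftfreq(signal.size, d=1.0)
    low = energy[freqs <= 1.0 / cutoff_period].sum()
    high = energy[freqs > 1.0 / cutoff_period].sum()
    return (low - high) / (low + high)

t = np.arange(720)
structured = np.sin(2 * np.pi * t / 30)                                  # clean monthly cycle
noisy = structured + np.random.default_rng(2).normal(0.0, 1.0, t.size)  # heavy white noise added
print(robustness(structured, cutoff_period=15.0))                       # close to +1
print(robustness(noisy, cutoff_period=15.0))                            # much lower
```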

On the other hand, this concept of robustness shows us that uncertainty is not better or worse than structure. Uncertainty can be better than a structure with the same amount of energy: because the distribution of the energy of white noise in the frequency domain is flat, the high-frequency component is limited and the energy is distributed between low and high frequencies. High-frequency structure, however, does not provide value if its frequency is higher than the sampling one. For the manager it is not signal yet; it is only noise.

Although this discussion is good for analyzing some points in theory, a frequency analysis of an economy can become useless in a real case, and it is better to use other techniques in order to measure complexity, in the same way that it is better to use the Discrete Cosine Transform to compress TV signals than the Fourier Transform.

 

  


 

The Irreversible Process of Globalization

Earth (Photo credit: tonynetone)

An irreversible process is a process in which a system changes from one state to another while losing energy. This energy cannot be recovered if the process is inverted. The magnitude used to measure irreversibility is entropy. Entropy can be considered as a kind of energy that cannot provide work.

In nature, all processes are irreversible, but some of them can be more entropic than others. The existence of entropy implies that we cannot go back in time to a previous state. The entropy of the universe is always increasing, but there is no reason why the entropy of a system cannot be reduced. This is possible, but we will need to employ a lot of additional energy to do it, increasing the entropy of the environment much more.

The economy can be seen as a thermodynamic system. A thermodynamic system is defined by a set of state variables that follow certain rules and principles. A system is defined by a certain number of state variables, and other additional variables can be obtained from them. The minimum number of variables that define the state of the system is the dimension of the state space.

Economists use a set of variables to characterize the state of an economy, for instance: savings, investment, employment, debt, GDP.

Entropy is not usually a magnitude used by economists, but it can be calculated as a macroeconomic value from a detailed microeconomic analysis, or from other state variables. Entropy would be proportional to the logarithm of the number of possible microstates.
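For reference, this is the standard statistical definition (a textbook formula, not something specific to economics), where $\Omega$ is the number of microstates compatible with the observed macrostate, $p_i$ is the probability of microstate $i$, and $k$ is a constant of proportionality:

$$S = k\ln\Omega,\qquad\text{or, more generally,}\qquad S=-k\sum_i p_i\ln p_i$$

which reduces to $k\ln\Omega$ when all the microstates are equally probable.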

Entropy is well understood as an energetic magnitude, but in simpler words it is related to the degree of disorder of a system. In the statistical approach, it is related to uncertainty. The economy, like any other process in nature, tends to increase entropy in a natural way, because there are many more disordered states than ordered ones.

As I said before, it is possible to put a system into a more ordered state, but this can only be done by spending a lot of energy (or, in more economic terms, a lot of money) and increasing the entropy of the environment (or its uncertainty).

Following this scheme, it is easy to understand some current economic phenomena. For instance, the crisis of the Euro can be explained with entropy-based reasoning. The Euro establishes links among different economies with different economic states and production capabilities, without a central controller and with many economic subsystems, each with its own controller. Following their natural behavior, all countries should finally reach an equilibrium with a similar relative state, but this is not possible because the different controllers and rulers are trying to avoid the deterioration of their own economies, with the effect of preserving the best economies (in a higher state of energy) while preventing improvements in the state of the other economies (in a lower state of energy). The efforts of countries such as Germany to preserve their state in an interconnected economy require a lot of energy that must be obtained from their environment (the other countries of the Euro zone). In other words, Germany is not paying for the crisis of the other countries, as many people think; it is holding back the natural process of equalization, forcing the other countries to spend much more money in order to preserve their production level. Although this may seem odd to them, this is the real meaning of the risk premium on their debt: it is only entropy, energy (or money) that cannot provide work (or production). Entropy can also explain why German debt is obtaining negative interest rates, due to the need of investors in the peripheral countries to search for safer investments. This shows that it is possible to reduce one's own entropy at the expense of the environment.

I know that if we try to do a cause-and-effect analysis, many people will say that I am swapping cause and effect, but this is not important here: I am looking at the final equilibrium state. The aim of the Euro is equalization, not differentiation. I am not judging the German government, which is only fulfilling its own function and probably does it better than other governments. The problem here is not only one of local management; it is systemic too. Since every action has a reaction, markets were simply responding to the different policies of different countries, following the natural laws of entropy and making the local problems larger.

Germany should understand that with the Euro, competition must not be among countries, only among companies, because preserving a systemic problem would eventually destroy its own economy once the entropy of its environment can no longer be increased easily, that is, if the other countries finally collapse. And the other countries must understand that if they do not apply good policies, theirs will be the first economies to be destroyed.

As the problem is systemic, its solution will need a systemic approach instead of local ones, as some economists have already asked for: reducing the uncertainty (entropy) of the Euro with actions from the central bank. With a systemic approach everybody wins.

Looking at the world, we can see the same effect due to the globalization of the economy. Nowadays some economists are searching for a culprit for the world economic crisis. There is really only one cause: the globalization process, which is natural and therefore unavoidable, combined with the different policies being applied in different countries.

Globalization is not bad in itself. It is based on the other natural trend of systems: to increase complexity. Complexity is another kind of energy, but part of this energy can be used to provide production; it is not only entropy. By increasing complexity we can increase production. For instance, an economy with international commerce is more complex than an isolated one, but it gains new raw materials and access to new markets with which to increase production. Complexity is bad when it has a large entropic component, because that turns the system into an unmanageable one.

Graph 1: Growth of World Complexity. Source: Ontonix

Going back to the previous example, the Euro was developed in order to provide a system with higher production capability. This objective requires establishing links among economies (increasing complexity), although a single currency was an attempt at simplification in order to avoid a growth of entropy. Entropy does not grow excessively because of the Euro; it grows excessively because the system is not prepared to take advantage of it, as there is no common economic policy. The system is not evolving freely in a uniform way. This problem of a non-uniform system was anticipated by economists before the creation of the Euro, and they tried to avoid it by establishing a convergence framework as a condition for entering the club.

If we analyze the entropy of the world, we can see that it has been increasing for many years, as the following graph shows.

Graph 2: Growth of World Entropy. Source: Ontonix

As the links between the economies of different countries increase every day, the world economy undergoes a natural irreversible process. As I said at the beginning of this discussion, this means that we are spending a lot of resources worldwide that cannot provide production, and we cannot go back in time to a previous state. If we compare this case with the European one, there is an important difference: the Euro zone has an additional environment (the rest of the world), but for the world as a whole there is no external economic environment where uncertainty can be increased in order to reduce ours.

There are two actions that can be taken to reduce the hazardous effect of this irreversible process. We can reduce the number of highly probable states by increasing control and rules; the effect of this action is to reduce uncertainty, although it may reduce useful functionality too. On the other hand, we can increase the links between economies, for instance by promoting internationalization; this will increase the maximum complexity. This last action will increase entropy as well, but it is beneficial because productive capability will also be increased, not only entropy. However, the higher the complexity, the more difficult it is to manage the system.
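The following is a small numerical sketch of these two actions, with arbitrary example numbers: concentrating probability on fewer states lowers the entropy, while linking two economies enlarges the joint state space and therefore the maximum entropy.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution over economic states."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Action 1: rules and control concentrate probability on fewer states,
# which lowers the entropy (the uncertainty) of the system.
free_economy = np.full(8, 1 / 8)                        # 8 equally likely states
regulated = np.array([0.7, 0.1, 0.1, 0.1, 0, 0, 0, 0])  # control favours a few states
print("free:", round(entropy(free_economy), 3), "regulated:", round(entropy(regulated), 3))

# Action 2: linking two economies of 8 states each yields 64 joint states,
# so the maximum possible entropy (and the room for productive structure) grows.
print("max entropy, isolated:", round(np.log(8), 3), "linked:", round(np.log(64), 3))
```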

To finish, I propose thinking about the following issues:

- I do not think that the globalization process is good or bad; I have only said that it is irreversible (much of the energy or money that has been applied in the process could not be recovered if we tried to go back to a previous state). Therefore, the efforts of governments should be made bearing in mind that the current problems are systemic rather than only local.

- The final entropy will depend on the path we follow in order to reach stability. In fact, many recommendations from international organizations try to make the process as little entropic as possible.

- As different countries have different productive resources, and climate has a strong influence on economic activity, a certain level of order in the world system will always be preserved. Different raw materials, different technology levels and different education will preserve different levels of wealth. Education and technology can eventually be shared, but raw materials and climates are always localized.

- It is important to notice that equality in nature is not absolute; the process only makes the variables equivalent at a macro level. Just as the particles of a gas at a certain temperature have different individual kinetic energies, connected economies tend towards an equivalent macroeconomic state, but this does not imply that all people and businesses will have the same level of wealth at the microeconomic level. In every part of the world there will always be richer and poorer people.

- We should not build a system that we cannot manage, because it will finally collapse.

 


 

Mr. Luis Díaz Saco

Executive President

saconsulting

advanced consultancy services

 
