Information is not power, but knowledge is

In the era of big data, most people think that information is the ultimate source of power; however, this is a mistaken idea. Wrong ideas lead to bad actions and, finally, to a loss of real power.

Information can be seen as a set of data samples about a phenomenon. Knowledge is much more than this: knowledge comes from a model of the behavior of that phenomenon that lets us make predictions about the future. Power resides in anticipating the future so that we can take advantage of it.

This is not a new concept. Bertrand Russell wrote about how Chinese emperors protected the Jesuits because they could predict eclipses better than the court astronomers, and this ability could be used to display the power of the Empire and its ruler to the people.

Since ancient times, even political power has been linked to knowledge, although for common people it was associated with magic rather than science.

Espionage services are known as intelligence services for the same reason. Although the hard work of espionage is gathering hidden data, what provides value to a government is not the raw data but data processed so that the right decisions can be made. Intelligence can be seen as the process of distilling a large set of data in order to anticipate an action that gives us an advantage in reaching some desired goal.

In the era of big data this holds once again. The role of data scientists is essential in order to take advantage of the information gathered by a big data system. However, we are at the beginning of this era and the systems that process large amounts of data are still quite naïve, although the technology is improving quickly.

There are several things to consider when working with a large amount of data:

  • We need to know the limits of information processing.
  • We need to know the issues related to the quality of information.
  • We need to know the effects of noise in the data.

Gathering data does not by itself provide a model of reality. This was demonstrated scientifically through Nyquist’s sampling theorem, which shows that, in order to recover a bandlimited continuous signal, it is necessary to sample it at least at twice its highest frequency. In simpler words, in order to get perfect knowledge of such a phenomenon, it is necessary to gather a certain amount of data. The theorem goes further and shows that sampling the signal at a higher rate cannot provide more information about it. In simpler words, gathering more than the required data about a phenomenon will not provide more knowledge about it.
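
A minimal numpy sketch of the undersampling side of this statement (the frequencies and sampling rates are arbitrary illustrative choices): a 3 Hz cosine sampled at 4 Hz produces exactly the same samples as a 1 Hz cosine, so the data alone cannot tell them apart, while sampling above twice the highest frequency removes the ambiguity.

import numpy as np

def samples(freq_hz, fs_hz, n=16):
    """Sample a cosine of frequency freq_hz at sampling rate fs_hz."""
    t = np.arange(n) / fs_hz
    return np.cos(2 * np.pi * freq_hz * t)

# Below the Nyquist rate of the 3 Hz signal (2 * 3 Hz = 6 Hz), it aliases onto the 1 Hz signal.
print(np.allclose(samples(3.0, 4.0), samples(1.0, 4.0)))   # True: indistinguishable samples
# At 8 Hz, above twice the highest frequency, the two signals can be told apart.
print(np.allclose(samples(3.0, 8.0), samples(1.0, 8.0)))   # False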

This is important for big data systems. Knowledge about a phenomenon will not be improved by gathering more data once we already have a perfect model of it. Many companies may be selling nothing of value around the information-gathering business.

Another important aspect is that models are a source of uncertainty because we need to make assumptions about reality in order to create them.

First of all, most statistical analyses try to simplify a phenomenon by supposing that the problem is linear; however, this simplification often cannot be assumed, because reality is usually non-linear.

When data scientists try to model a system, they often use polynomials. Even if the system really is polynomial, in order to recover it we need to know the degree of the polynomial. We will not get better results by fitting a polynomial of ever higher degree to a set of data, and the errors that feed into subsequent actions can be huge.
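
A small sketch of that risk with a made-up quadratic system and noise level: the higher-degree fit matches the samples at least as well, but its prediction outside the sampled range is typically far worse.

import numpy as np

rng = np.random.default_rng(0)

# Data generated by a true quadratic system, observed with a little noise.
x = np.linspace(-1.0, 1.0, 20)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.05, x.size)

true_value = 1.0 + 2.0 * 2.0 - 3.0 * 2.0**2      # the real system evaluated at x = 2

for degree in (2, 9):
    coeffs = np.polyfit(x, y, degree)
    prediction = np.polyval(coeffs, 2.0)          # extrapolate outside [-1, 1]
    print(degree, round(prediction, 2), "error:", round(abs(prediction - true_value), 2))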

In other words, knowledge about the physics of the system will help us to compute a better model than relying on computing brute force alone. Knowledge about the physics of the system always beats the “stupid method” of the computer programmer.

Another issue to consider is the following: relying only on Nyquist’s theorem, we can recover a periodic signal even if the data is biased toward one region of the axis, provided we sample at the correct rate. This is true, but the reason is that we are applying scientific knowledge about the signal instead of the “stupid method” of the computer programmer: we assume that the signal is periodic. This no longer holds if we do not have that knowledge about the signal. If the signal were polynomial, the bias could be significant.

Imagine a system that we know can be modelled as a polynomial. That kind of signal is not bandlimited, so Nyquist’s theorem cannot be applied. If we only have data from the left side of the axis, all the samples may have the same sign (for instance, positive), while the sign on the right side will depend on the degree of the polynomial: if the degree is odd the sign will be the opposite, and if it is even the sign will be the same (see the numerical sketch after Figure 1).

Figure 1. The same set of three points can be interpolated by the functions x² and −x³/3 − 2x/6.
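
A minimal numpy sketch of this effect (the three sample points and the constructed polynomials are my own illustrative choice, not the ones used in the figure): the same left-side samples are interpolated exactly by an even-degree and an odd-degree polynomial whose values on the right side have opposite signs.

import numpy as np

# Three samples taken only on the left side of the axis, all positive.
xs = np.array([-3.0, -2.0, -1.0])
ys = xs ** 2

# Even-degree interpolant: the unique quadratic through the three points (recovers x^2).
quad = np.polyfit(xs, ys, 2)

# Odd-degree interpolant: subtract a cubic that vanishes at the data points,
# so the new polynomial still passes exactly through them.
cubic = np.polysub(quad, np.poly(xs))                  # np.poly(xs) = (x+3)(x+2)(x+1)

print(np.polyval(quad, xs), np.polyval(cubic, xs))     # identical fits on the left side
print(np.polyval(quad, 3.0), np.polyval(cubic, 3.0))   # about 9.0 versus -111.0 at x = +3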

This fact is an example of data quality. Not all data sets have the same quality for building a model of a system. In order to make a good interpolation, the data must be distributed fairly uniformly over the entire domain of the system.

In the era of big data this is especially important, and many times it is not well analyzed. Although I can think of examples related to social affairs, I will try to be politically correct and use an example about myself. Imagine that a recruitment company is trying to classify me through a big data system in order to determine which job best fits my profile. I have worked in a pure-science research center as a researcher, and I have worked in a quality engineering company as a manager. The first fact would suggest that I am interested in creative positions, trying to provide new things for mankind outside the standard way of working; the second would suggest that I am an expert in the standardization of tasks and that I would be interested in defining and following rules.

That profile is too complex to fit an easy classification, and it would become even more complex as time goes on. Any information biased toward the left or the right of the temporal axis would lead to an incorrect classification. In this case, it is important to analyze the whole profile and to have a more sophisticated classification method for more complex profiles. One of the reasons for big data analysis is the analysis of more complex situations: the more complex the system, the larger the amount of data required and the more complex the analysis procedure. As in the example of espionage and intelligence, large-scale analysis requires large-scale intelligence. That is why the development of artificial intelligence has become fashionable in the era of big data.

Immunity to noise is very important when we are trying to define the model of a system. What happens when there are pieces of incorrect data inside the gathered data? Imagine a linear system and an analysis algorithm that tries to determine the order of the model that best fits the data. A small shift in a few data points can increase the order of the model that fits the data perfectly. In any data analysis system, we need some method of filtering the data to avoid the negative effects of noise.
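
A minimal sketch of that sensitivity, assuming a naive "lowest degree that fits the data almost exactly" criterion and made-up numbers: shifting just two samples of a truly linear system inflates the apparent order of the model.

import numpy as np

def lowest_exact_degree(x, y, tol=1e-6):
    """Lowest polynomial degree whose fit leaves (almost) no residual on the data."""
    for degree in range(1, len(x)):
        coeffs = np.polyfit(x, y, degree)
        if np.max(np.abs(np.polyval(coeffs, x) - y)) < tol:
            return degree
    return len(x) - 1

x = np.arange(7, dtype=float)
y_clean = 2.0 * x + 1.0              # a truly linear system
y_noisy = y_clean.copy()
y_noisy[[2, 5]] += 0.5               # shift just two samples slightly

print(lowest_exact_degree(x, y_clean))   # 1: the linear model is recovered
print(lowest_exact_degree(x, y_noisy))   # 6: noise inflates the apparent order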

Again, this is not a definitive solution. Filters eliminate special cases: they look for an average behavior and discard the data that lie far from it. Applied to a complex system, this kind of solution may not be good enough.

From this point, we can understand that complex systems are not easily modelled. Using generic modelling techniques for complex systems under great uncertainty can lead to very bad results.

Knowledge is power, but we must be sure that we have the proper knowledge. Scientific knowledge advances over time, and some people produce great leaps in the knowledge owned by mankind that improve our technology and our lives. Any model has a domain of application; outside of it, it cannot be considered knowledge. Newton’s physics of space and time can help us determine how long a car trip will take, but without Einstein’s physics we could not have accurate satellite navigation. In other words, limited knowledge provides limited power. The rationale for basic research is a matter of power too: better knowledge gives mankind greater power.

Scientific knowledge is linked to its limits; however, using computer algorithms without taking those limits into account leads to very wrong decisions.

Back to the previous example, now including big data and social networks: I have a lot of contacts in the quality business, so most classification algorithms would draw an analogy between my profile and the profiles of those contacts. However, although it is true that I have a great deal of knowledge about standardization and related management methodologies, my role inside a quality organization would be totally different, because I was an innovation manager. While the role of a quality expert is to provide uniformity in procedures so that things can be done in a repeatable and reliable way, increasing the ability of the management staff to assure the quality of the products and services provided, the role of an innovation manager is to drive changes in the organization so that it can provide new products and services. Classifying people through a single common keyword does not capture the different roles of different people in an organization, and it cannot be done simply by counting the number of links among different kinds of people.

On the other hand, the risk of treating different things as equal in order to simplify the analysis performed by a computer program is not an economic one. Day by day, many businesses are driven toward the majority, because through the internet it is possible to provide expensive services at a low price, thanks to a huge market volume that compensates for the required investment. A few clients left unsatisfied by an incorrect segmentation do not produce a great economic loss, and providing different products for them might not be profitable.

The real risk is related to knowledge and power. In a global market, companies will get more income from mediocrity than from investing in improving the quality of their products. High-quality products will never be interesting if educated people are wrongly classified. There has always been a different demand from common people and from educated ones. Think, for instance, of music. Art music, commonly called classical, has existed for many centuries alongside popular music. In previous centuries, writing art music provided much more income than writing popular music, because a rich patron could pay much more than a small group of poor people. Nowadays, the situation is the opposite: no single patron can pay as much as a musician can earn selling a song on iTunes at the modest price of one dollar all over the world through the internet. The result is obvious. Everybody still knows who Mozart was, and he was famous while he was alive, but nobody knows the current musicians writing art music, while everybody knows the names of many musicians writing popular music who mix with the higher social classes thanks to their huge earnings.

The question arising here is: if technical and social improvements are driven by advanced knowledge provided by a selected few, can a future society improve in a world built for mediocrity? There is something that can help to avoid this: competition. Man reached the Moon because of the space race between the USA and the USSR, not because people demanded it. Competition among companies can drive the search for better products and services for people who do not demand them, even though market trends may be pointing toward mediocrity. Competition among companies can also produce better tools for classification and market analysis, and it can boost the customization of solutions for different kinds of people, because better segmentation reduces the cost of that customization.

Competition is what will demand new and better products and services and will make mankind advance beyond the era of big data. A society or organization that penalizes competition, making new things, or enjoying thoughts and solutions different from the majority’s will be a society with less power, doomed to be stuck in mediocrity.

To close this discussion, I would propose that you take complexity analysis techniques into account in order to cope with huge amounts of data where traditional statistics and classic modelling techniques are not effective enough. These techniques can provide knowledge about the structure of the system that will be useful for increasing our capability to make better decisions.

 

Mr. Luis Díaz Saco

Executive President

saconsulting

advanced consultancy services

 

He is currently Executive President of Saconsulting Advanced Consultancy Services. He has been Head of Corporate Technology Innovation at Soluziona Quality and Environment and an R & D & I Project Auditor for AENOR. He has acted as Innovation Advisor to Endesa Red, representing that company in working groups of the SmartGrids Technology Platform in Brussels and collaborating in the preparation of the EU research strategy.

 

October 2014: Innovation to reduce the complexity of the business

From a strategic viewpoint there can be many routes to reach a certain positioning for our company. Thinking in mathematical terms, for a given set of management variables the best route to choose would be the one that minimizes the money spent. It could be seen as a geodesic curve on a hypersurface (the subset of states that we can reach through our management actions) of the space spanned by the variables defining the state of the company.

I am going to draw a simple example to illustrate this fact.

Imagine a simple company with the following function of production:

o = C·x

Where o is the quantity supplied, C a productivity constant, and x the amount of raw material,

and the following boundary conditions from the law of supply and demand:

d = -A·p + dt

Where d is the demand, dt the total demand, A the slope of the demand curve, and p the price of the product.

The state of our simple economy is defined by two variables: x (the amount of raw material for production) and p (the price of the product), which determine sales and surplus, assuming that the cost of raw material and the slope of the demand curve are constant. Setting supply equal to demand to maximize benefit, both variables become linked and the state of the system has a lower dimension (there is no surplus).

B = Bs - S

Where B is the benefit, Bs the benefit from sales, and S the cost of the surplus.

Understanding positioning as the market share we want to reach, our strategy would be defined by the percentage of demand that we want to serve. This value directly determines the price of the product and the quantity of raw material that we need to obtain. We have only one degree of freedom to define our pricing strategy if we want to maximize benefit.
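
A minimal numerical sketch of this single degree of freedom under the toy model above (the values of C, A, the total demand and the raw material cost are arbitrary placeholders):

# Toy model from the text: supply o = C*x, demand d = -A*p + dt, benefit B = Bs - S.
# All numeric values are arbitrary placeholders for illustration.
C = 2.0       # units of product obtained per unit of raw material
A = 5.0       # slope of the demand curve
dt = 100.0    # total demand
c_raw = 1.5   # cost of one unit of raw material

def strategy(served_fraction):
    """Choosing the share of total demand to serve fixes every other variable."""
    d = served_fraction * dt      # demand we decide to serve (and produce, so S = 0)
    p = (dt - d) / A              # price implied by the demand curve d = -A*p + dt
    x = d / C                     # raw material needed so that supply C*x equals d
    benefit = p * d - c_raw * x   # revenue from sales minus the cost of raw material
    return p, x, benefit

for f in (0.25, 0.50, 0.75):
    p, x, b = strategy(f)
    print(f"serve {f:.0%} of demand -> price {p:.2f}, raw material {x:.1f}, benefit {b:.2f}")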

There is a natural equilibrium point where the supply and demand price curves meet, providing an equilibrium value for the quantity produced, following the law of supply and demand. The optimum would yield a unique state for our production and prices. Then, as there is only one possible state, there is no need for a pricing strategy. In markets under perfect competition, the minimum price would eventually be imposed in this way, and our sales would be linked directly to the sales of our competitors.

The law of supply and demand

A real economy is much more complex than this one. In a real case, the demand curve would not be linear. There would be other competitors in the market serving the product, and if the market can be satisfied by those competitors we cannot take the supply and demand curve as a direct reference for our sales. The cost of raw materials would be governed by its own supply and demand curve too. There would then be many more variables involved in defining the state of the system, although we could still only act on the price of our product and on our amount of raw material for production.

Thinking in this way we can see that the result of any marketing strategy is, in practice, unpredictable.

Strategy is not a matter of solving a set of equations, but a matter of anticipating the movements of competitors. We need a strategy because we are in competition; otherwise we would only need to minimize some cost function.

Strategic thinking requires a capability of prediction that can only be obtained in certain states. For instance, in the previous example, if the supply from competitors can satisfy the total demand, we cannot assume that we will sell anything unless our prices are below those of the competition; we can only predict sales at lower prices. This is what happens in a crisis. In that case, demand shrinks and only the companies that bet on reducing prices (modifying the supply curve) can control their own company, in the physical sense of the word control. As production costs impose a limit on the price of the product, the strategy that best fits that situation, following this economic model, is cost leadership.

And what about innovation? Well, in this case, innovation is not necessarily a drawback, but it should be aimed at improving production by reducing production costs. Process innovation would be the logical innovation strategy. This would be swimming with the tide, the low-risk innovation strategy.

A more interesting approach is to forget mathematical models and use model-free techniques to analyze our activity. Risk can be seen as something subjective, related to perception. Even when we are talking about mathematical risk, considering risk as not fully objective is valid too, because although mathematical risk is not observer-dependent, it is model-dependent. Sometimes swimming against the tide can produce better results than swimming with it.

If we look back at the previous example, the problem with predictability is related to demand that is already satisfied; therefore, a product innovation strategy can be a solution too, because a new product will have a new, unsatisfied demand. What are the pros and cons? They are obvious. Although the demand may not be very high, you can select a high-price strategy that you could not use before, escaping the limits imposed by the cost of production, and less competition can imply less complexity (fewer variables and less uncertainty to consider in defining the state of the business), as the initial example showed.

Mature markets can become more complex than new ones, because there is a natural tendency for entropy to increase in every system as time goes by.

A product innovation strategy, even in a crisis, can produce simpler businesses than a cost leadership strategy, although many people may think that it means swimming against the tide. The important thing is to find the proper market or market niche. They are probably right about the comparison: as when swimming against the tide, you will need the necessary energy to do it, or in economic terms, the necessary money. Sometimes there are not only better or worse strategies, but realizable and unrealizable ones too.

Swimming with the tide is comfortable, but comfort is usually not a synonym for survival, because not all competitors can win by following the same strategy. The best soldiers are trained to accomplish their tasks under the worst conditions, and there is a good reason for that.

 

Mr. Luis Díaz Saco

Executive President

saconsulting

advanced consultancy services

 

 

He is currently Executive President of Saconsulting Advanced Consultancy Services. He has been Head of Corporate Technology Innovation at Soluziona Quality and Environment and an R & D & I Project Auditor for AENOR. He has acted as Innovation Advisor to Endesa Red, representing that company in working groups of the SmartGrids Technology Platform in Brussels and collaborating in the preparation of the EU research strategy.

 

Benefits of the Certification of R & D & I Projects

Picture of the medal of honor (Photo credit: Wikipedia)

BENEFITS OF THE CERTIFICATION OF R & D & I PROJECTS

Luis Díaz Saco. Head of Corporate Technological Innovation. Soluziona Calidad y Medio Ambiente. Expert Technician (Auditor) on R & D & I Projects of AENOR

(This text corresponds to a paper presented at the 2003 Conference on Innovation at the Palacio de Congresos in Madrid. Luis Díaz Saco is currently Executive President of Saconsulting Advanced Consultancy Services.)

Congreso Innovación 2003 - Sala Roma

OVERVIEW

Norcontrol, a company of Soluziona Quality and Environment, was the first company in Spain to certify a project according to the UNE 166001 standard for the management of R & D & I projects. This experience has been an important step forward in the systematization of its innovation processes.

R & D & I activity has some special characteristics that should be taken into account when defining an R & D & I project. These differential aspects, identified over the years by specialists, are especially important when attempting to boost that activity in the business environment and to establish stable relationships between the public research system and private companies with the aim of increasing competitiveness.

The use of a methodology in the development of R & D & I projects facilitates decision-making about investment and guarantees the subsequent use of the resources dedicated to these activities. The new UNE 166001 standard for R & D & I project management provides a suitable framework for this activity, and its use brings additional benefits such as the establishment of a common language for all those involved in the generation, development and marketing of the project.

The certification process, which involves public bodies and independent technicians in the company’s R & D & I process, reduces the risk associated with the activity. Finally, obtaining the certificate brings new benefits for the company, such as an improved corporate reputation, and for the project itself, which acquires a recognized identity of its own that makes its development easier; this is especially important for projects carried out by several entities.

THE EXPERIENCE OF SOLUZIONA QUALITY AND ENVIRONMENT

The companies that make up Soluziona today have been characterized since their beginnings by the introduction of new products and services to maintain a proper position in their markets. In 2002, Norcontrol, a company integrated in Soluziona’s quality and environment area, became the first company in Spain to receive an R & D & I project certificate from AENOR, for the research and development of a novel technology applied to mechanical inspection systems. This milestone has meant a significant leap in the definition of systems and the management of R & D & I projects; it supports the consolidation of the culture of innovation at the company and the systematization of the innovation processes that began in 1998 with the creation of an R & D program supervised by a technological innovation committee chaired by the CEO of the company.

This experience places Soluziona Quality and Environment in a favorable position to transfer these benefits to its most important projects and its customers, as well as to continue the systematization of these processes by implementing an R & D & I management system according to the UNE 166002 standard.

CHARACTERISTICS OF R & D & I PROJECTS

All R & D & I activity requires three key components:

  •          An idea.
  •          Human and material resources to develop it.
  •          Sufficient funding to cover the costs of development.

Organizations may have any or all of them; those that combine these components effectively and systematically are more capable of achieving leadership in their competitive environment.

While the basis of all innovation activity lies in ideas, the key to success lies in the appropriate systematization of activities, from the generation of ideas to their transformation into products that reach the market.

Ideas are the key component of innovative activity. By their very nature, good ideas are not only difficult to generate; they are difficult to manage within the organization (to obtain enough resources to develop them) and difficult to sell internally and externally (to obtain funds to develop them). However, these development difficulties become a source of competitive advantage if the ideas pass the test.

An essential feature of any innovation system is “to make the difficult easy”, putting the idea on a proper vehicle to manage its development and its internal and external dissemination. This vehicle is the R & D & I project, which can be identified as the core of the deployment of any innovation strategy.

The R & D & I project carries a lower risk than the idea from which it comes. An R & D & I project must have a limited risk; it must be possible to identify the technical and economic components of that risk, and the risk must be estimable in monetary units so that it can be compared with the benefit. In other words, the project itself should be the main way of controlling the risk associated with innovation activity.

R & D & I projects have the characteristics of investment projects. They are aimed at a clear, known goal and oriented toward profit (business or social). However, the technical risk associated with novelty involves a certain uncertainty about achieving the results. The outcome is uncertain, and development should be based on a precise definition of activities that makes it possible to analyze the working methods used and to learn from mistakes. Monitoring and control procedures must also be commensurate with the associated risk, and it is very important to devote time and resources at the start to the analysis of the idea and the study of how to implement it.

A management methodology for this type of project, one that meets the specific needs of this activity, is therefore required.

THE STANDARD MANAGEMENT OF R & D & I PROJECTS

An R & D & I project with a high chance of success should cover the following aspects:

  •          The idea is clearly defined and studied.
  •          It clearly defines the work to be carried out and the necessary resources.
  •          It facilitates decision-making:

                             – For the proposing unit, which usually has a manager who analyzes the novelty of the idea.

                             – For the funding unit, which demands an understanding of the return on the investment.

                             – For the executing unit, which needs to know whether it can assign resources to this activity.

                             – For the directorate, which ensures compliance with standards and the business strategy.

  •          It includes mechanisms to control and to reduce risk.
  •          It includes mechanisms and criteria for introducing changes into the project.

The standard UNE 166001 covers these aspects from a general viewpoint. It is applicable to different types of organizations and different types of projects (basic research, applied research, technological development and innovation), which makes it a reference framework that can serve as a common language for researchers and entrepreneurs.

This standard provides guidance for preparing the basic information of an R & D & I project. Conforming to it ensures proper analysis of the idea, from the study of the state of the art, the definition of tasks and the matching of human and material resources to them, the control of project risk, and the establishment of mechanisms for changing the project on the basis of its progress, to financing needs, quality management, and the exploitation of the results.

These aspects may already be covered in organizations accustomed to R & D & I activity, which have their own project management methods. However, complying with an external standard guarantees:

  •          A common language for communicating with entities with similar or different objectives.
  •          Easier setup of collaborations.
  •          Training of the organization in the importance of the activity and its systematization.
  •          The use of generally accepted criteria for good management practice.
  •          Benefits associated with the certification process and the certification itself.

BENEFITS ASSOCIATED WITH THE CERTIFICATION PROCESS

During the certification process, a technical specialist analyzes the project and evaluates its compliance with the standard and its quality. Then an independent expert evaluates the project and determines which activities are R & D and which are innovation (identifying those that carry a higher technical risk). This process helps the organization to identify, estimate and reduce the risk of the project, thereby facilitating decision-making.

A subsequent certification of expenditures assigns a value to the project that may be useful as a scheme for distributing the benefits among the participating entities and other parties. This certification of expenses is compatible with the tax deductions of the Ministry of the Treasury.

The certification is in itself an acknowledgement of the quality of our R & D & I activity, and it can help to improve the corporate reputation of organizations. This is especially important for those, both public and private, whose objective is research and development.

These benefits, associated with the merit of the project acknowledged by third parties, turn it into an asset of the company even before its execution.

 

 

Software Complexity

Image of the internals of a Commodore 64 showing the 6510 CPU (40-pin DIP, lower left). The chip on the right is the 6581 SID. The production week/year (WWYY) of each chip is given below its name. (Photo credit: Wikipedia)

There is an old sentence attributed to Bill Gates: “640K of RAM is enough for anybody”. I think Gates has denied being its author. But it is interesting to analyze how complexity is evolving in the software sector.

I began writing computer programs on an old Commodore 64, with only 64 KB of memory and an 8-bit 6510 microprocessor from MOS Technology, when I was sixteen. That computer had a very old and simple BASIC interpreter built into its ROM, which hugely limited the programming capabilities of that simple device.

The Commodore 64 was a success for gaming (especially in Germany) thanks to its better graphics and sound capabilities than its direct English competitor, the Sinclair ZX Spectrum. In order to take advantage of the “full potential” of the device, many programs were developed directly in assembler (the language closest to machine code).

Although the sentence attributed to Gates may seem stupid today, when my latest computer has 8 GB of RAM with a two-core, 64-bit microprocessor, and even my new smartphone has 4 GB (and neither is at the high end), I must admit that it does not sound stupid once you have written software in assembler. In that case it can even sound reasonable, because you can bet that you cannot fill 640 KB of memory with a program written in assembler in a month without using function libraries (and without counting data).

Writing a computer program in assembler is very hard but, as a language, assembler is very simple. When we talk about complexity without any metric, the concept is sometimes not well understood, because we do not distinguish between the complexity of the task and the complexity of the system.

A program written in assembler produces code that always does exactly what you have written, because the assembled code specifies with total precision how the registers of the microprocessor must process the data. As the precision is total, the uncertainty is null and, in that sense, the program is complicated but not complex.

However, this kind of programming is very error-prone, because people do not think like a microprocessor. Programmers must translate high-level actions expressed in formal instructions into simple low-level manipulation of numbers. The uncertainty due to error-prone programming yields a system with a high level of uncertainty in its behavior. The result can be a very complex system.

Current systems have only a small part of their operating system and some drivers in assembler, because even the operating system is written in a high-level language. A high-level language provides instructions that are turned into machine code by a computer program known as a compiler. A compiler can translate a few hundred lines of code into many thousands of machine instructions. The degree of complexity of a high-level program is much larger than that of a low-level program, because the structure of the program can be very big. A microprocessor has only a very limited number of registers (internal variables for making calculations), but a high-level program can have any number of variables, providing much more functionality; however, the result can be less error-prone because the language is closer to human language. The program can be complex, but the resulting system can have less complexity.

With a high-level language it is possible to write, in a month, a program that fills the memory of an old computer like the Commodore 64 once it has been compiled. In that case there is no relationship between complexity and memory size; however, as memory increases, software can be made to handle larger amounts of data through many new variables and functions that operate on them. Although the probability of malfunction due to an error-prone language is lower, the final uncertainty in the system can be larger, because the number of functions and variables where an error can appear is larger.

Systems are more complex because of more complex hardware (size and number of registers, number of cores, instruction sets) and more complex software (object-oriented programming, multiprocess programming, threads, multitasking operating systems, virtualization), but not because of memory size, although more memory allows a more complex software implementation. It is the structure of these systems that provides most of the complexity, rather than the uncertainty of a very error-prone programming language.

If there is a relationship between complexity and size, it is better defined by Moore’s Law, which describes the exponential growth of the number of transistors integrated on a single microprocessor chip.

Graph 1 Moore’s Law on number of transistors in a microprocessor by Wgsimon (Source Wikipedia)
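
As a rough numerical illustration of that exponential growth (a sketch assuming the commonly quoted doubling period of about two years, with the 1971 Intel 4004 and its roughly 2,300 transistors as the reference point):

def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Rough Moore's Law estimate: the transistor count doubles about every two years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")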

We say that something is complex when it can evolve in an unpredictable way under an unexpected event. With an assembler program on a microprocessor with one accumulator and two index registers, things are very hard to implement but errors can be easy to identify. But how can we know what is wrong in a system with dozens of processes, each written by several programmers, with different software libraries written in different computer languages, running simultaneously on a quad-core microprocessor, connected to several networks and fed with data from hundreds of computers, if it suddenly hangs?

Of course, system administrators have tools and techniques to identify those problems after they have happened (for instance, some wise computer scientist invented logs), but it would be more useful to predict some faults before they occur, especially in certain industrial systems.

I want to point out that not all this complexity is bad if it is well managed, namely, if the system is designed for simplicity. For instance, in a modern computer, when a process hangs the whole system does not stop, only one of the cores; the system will run slower, but other critical tasks can continue without producing a collapse of the whole system. The collapse will depend on the importance of that process and on its links to the other processes running on the system. Another process could analyze the system automatically, detect that there is a hung process, and stop it permanently or restart it in order to provide better performance. In a similar way, the complexity of the system could be measured through some control variables in order to anticipate a malfunction.
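
A minimal Python sketch of that last idea, assuming a hypothetical watchdog process that restarts a worker whose heartbeat goes stale (the timings and the simulated hang are placeholders, not a description of any real operating system mechanism):

import multiprocessing as mp
import time

def worker(heartbeat):
    """A worker that reports a heartbeat while it runs; here it deliberately hangs."""
    for step in range(100):
        heartbeat.value = time.time()   # signal that the process is still alive
        time.sleep(0.2)                 # real work would happen here
        if step == 10:
            time.sleep(3600)            # simulate a hung process

def supervise(max_restarts=2, timeout=2.0):
    """Watchdog: restart the worker whenever its heartbeat goes stale."""
    heartbeat = mp.Value('d', time.time())
    process = mp.Process(target=worker, args=(heartbeat,))
    process.start()
    restarts = 0
    while restarts <= max_restarts:
        time.sleep(timeout)
        if time.time() - heartbeat.value > timeout:
            print("worker looks hung; restarting it")
            process.terminate()
            process.join()
            heartbeat.value = time.time()
            process = mp.Process(target=worker, args=(heartbeat,))
            process.start()
            restarts += 1
    process.terminate()
    process.join()

if __name__ == "__main__":
    supervise()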

Sometimes complexity can be fought with complexity; as in a war, fire can be the best answer to fire. This is not so odd: in order to reduce complexity and stabilize a system, we can use a controller device. Controller devices reduce the number of the most probable states of a system, but they can be complex systems themselves, and of course they must be tuned to perform a certain task properly. This is how the activity of complexity management can be seen: we must design and tune some kind of complexity control device in order to prevent the system from collapsing under unexpected events.

In the last example, we would take advantage of a complex multicore microprocessor to run a critical process isolated from the rest of the system, which on some occasions can be a good strategy to keep the system performing properly. Here, hardware complexity yields less software complexity, because the software has been developed to take advantage of it. Complexity management is not simply a matter of trying to reduce structure or uncertainty; it is a matter of keeping the system working far from its maximum complexity. Any complex system should always be designed from a complexity viewpoint.

A software system can have a few lines of code and be very unstable, while another can have many lines of code and be very robust if it was designed with complexity in mind. Unfortunately, practically no commercial computer system working today has been fully designed from a systemic viewpoint, as hardware providers are different from OS providers, and those are different from application providers. In any system, applications from many different sources may be running, and this makes it even more important that IT managers take this complexity into account.

 

 

Mr. Luis Díaz Saco

Executive President

saconsulting

advanced consultancy services

 

He is currently Executive President of Saconsulting Advanced Consultancy Services. He has been Head of Corporate Technology Innovation at Soluziona Quality and Environment and an R & D & I Project Auditor for AENOR. He has acted as Innovation Advisor to Endesa Red, representing that company in working groups of the SmartGrids Technology Platform in Brussels.

The effect of technology development on the world economy

fiber optics (Photo credit: go_nils)

Nowadays, there are many possibilities when selecting places to invest; however, there are not many good analyses of the evolution of different economies and of the reasons driving them to different performances. Financial markets are usually influenced instantly by news related to innovation and technology, but is that news so important in the long term?

Imagine that I want to start a new technology-based business and I want to select a good place to do it: where would it be better based, in the USA or in Europe? Looking at the evolution of both economies, we can see that they show similar progress.

Graph 1 Evolution of the GDP of USA and the Euro Area in billion dollars. Data Source OECD.

As we can see, the production volume of the USA is similar to that of the Euro Zone. We can operate with a common currency in a market of 2,500 billion dollars, whether based in a state as rich as California or in a weaker country such as Italy or Spain. Of course, currency is not the only thing we must consider, but we must not forget that weaker places can offer better prices for many infrastructures and services than stronger ones. Leaving aside political matters and other more important ones such as taxes and legal protection, I want to analyze the technical level of both economies. One way to do it is to look at their efficiency.

Graph 2 GDP per capita in dollars. Data Source OECD.

A higher GDP per capita shows that the American economy is more productive in terms of population, because it provides more production per inhabitant. The reasons for this can lie in a different use of production factors, for instance, different access to domestic raw materials, a different technology level, or a more efficient financial system and public sector. It does not necessarily depend on the good or bad behavior of the labor factor.

In order to analyze those aspects, we can now focus the analysis on the European Union.

Graph 3 GDP of the five main Euro economies. Data Source OECD.

Graph 4 GDP per capita of the main Euro economies. Data Source OECD

As we can see, there are no significant differences among France, Italy and Spain, and the Netherlands seems to behave better than the other countries when we analyze the economies in relative terms. The Netherlands has shown a strong financial system, but what can we say about its technology level?

Graph 5 Total Broadband Subscribers per 100 inhabitants. Data Source OECD.

Spain and Italy had less diffusion of broadband technology than France and Germany, and the Netherlands seems to be near maximum diffusion. But is there a direct relationship between the diffusion of communication technology and production? We can see it in the next graph.

Graph 6 GDP to Broadband technology subscribers. Data Source OECD.

In the case of the Netherlands there is a linear relationship (a flat curve) between production and broadband subscribers. Italy is a country that can produce more with the same number of broadband subscribers; this shows that the Italian economy is not as dependent on IT as other economies. On the other hand, Germany’s production may be becoming more IT-dependent, because it needs many more broadband subscribers for the same production. We can see how the Spanish economy has become similar to the Dutch one in terms of total production over broadband subscribers, although the Netherlands has more subscribers per 100 inhabitants.

Until now we have analyzed the effect of integrating already developed technologies in different countries, but what about the innovation side?

Graph 7 Gross Domestic Expenditure on R&D. Data Source OECD.

In this case, we can see that only Germany makes an R & D effort similar to that of the United States. This is probably one of the main reasons why the United States has proven to be more productive than the Euro Area.

As a conclusion, I would like to point out the following:

          R & D efforts are a very important driver of a strong economy.

          Incorporating technologies is clearly an extremely good way to improve a country's economy.

          Heavy use of a certain technology does not always imply higher production, because production requires several production factors.

          Even if a certain technology is not widespread in a country, that country may still have developed it. The development of technologies and the infrastructure investment needed to deploy them are different aspects of innovation.

          Telecommunications companies are usually multinational; the deployment of a technology in one country can be done by a company based in another.

 

 

Mr. Luis Díaz Saco

Executive President

saconsulting

advanced consultancy services

 

  

 

He is currently Executive President of Saconsulting Advanced Consultancy Services. He has been Head of Corporate Technology Innovation at Soluziona Quality and Environment and an R & D & I Project Auditor for AENOR. He has acted as Innovation Advisor to Endesa Red, representing that company in working groups of the SmartGrids Technology Platform in Brussels.

 

October 2012: Must the future of a country be judged only by its public debt?

Primary Mirror Segment Engineering Design Unit (Photo credit: NASA Webb Telescope)

It is absolutely true that if a country has a lot of debt, it must pay it sooner or later. Looking toward the future, an excess of public debt can be analyzed in terms of a vicious circle of more taxes, less consumption, less production, less public income, and more taxes again. This scheme may seem useful to simple minds, but we all know that the economy is much more complex than this.

Why is the economy so complex? The reason is that there are a lot of factors involved; these factors cannot easily be measured as mathematical variables; if we do measure some of them, there are many links among the variables we have chosen; there is no perfect mathematical model of these relationships; and there is a lot of uncertainty about how rulers will make decisions in the future and about the consequences those decisions will produce.

But, please, let us go back to the vicious circle. Looking at it, we find that there is only one way out. Governments can act on only one of the phases: consumption, production and, therefore, public income are decisions taken by the citizens. Rulers can act only on taxes. Following that scheme, the only way to leave the vicious circle is to reduce taxes and, if there is a debt problem, reducing public expenditure in order to be able to do so is the best way. A country that reduces public expenditure and taxes should always be considered a good investment.

It seems easy, doesn’t it? Theoretically yes, but it is not as easy to implement in practice, because the apparatus of the state has a lot of inertia, especially where it is heavy. The private economy is always much more adaptable. On the other hand, on many occasions the economy can be at a working point where this assumption of reducing expenditure does not produce the expected effect. Following this scheme, it is very strange that many experts consider raising consumption taxes to reduce debt a good piece of advice. This would be a good move only in a country where labor was better paid than other production factors, and not, for instance, in a country with high unemployment. When the results come, I suppose that, as usual, the same advisors turned analysts will say that the government of the country did not take the proper decisions with its policies.

The main reason to discard classic economic models has to do with technology. Most economic models have not considered technology as a variable, because it is not easily measured or managed. We can include the technology level as a variable in an economic model, but how can we obtain a useful and controllable measure?

I will try to illustrate this with an example. I can propose an initial indicator to measure one aspect of the technology level useful for the economy of a country: business enterprise researchers per capita. And I am going to compare the four most decisive Euro economies.

Graph 1 Business Enterprise Researchers per capita of the main Euro economies. Data Source OECD

There is a visible difference between the productive technology level of the Northern and Southern countries. Including public researchers, the situation could be different, but right now I am interested in the short- and mid-term evolution of the economy, and private researchers can be a better indicator for analyzing the evolution of the productive sector in those cases.

This graph shows only one aspect of the technology level, the one related to the production of innovation, but technology level can also be acquired through technology transfer.

The OECD has not provided the full set of data for France, so we must leave it out of the next analysis.

Graph 2 Technology Balance of Payments of three Euro countries in 2009. Data Source OECD

We can see that Germany is a net producer of technology, Italy acquires more technology than it produces, and Spain produces practically the same amount of technology as it acquires abroad. But we can go further in order to analyze the reasons for Italy’s position.

Graph 3 Analysis of the Technology Transfer through the Technology Balance of Payments. Data Source OECD

The technology balance of payments is not a good variable for analyzing the technology level, because a negative balance shows simultaneously that a country needs technology (a lower technology level) and that it is acquiring it (it has increased its technology level). Anyway, we can find some interesting points here:

          Germany was a net consumer of technology before 2003, and now it is a net producer of technology.

          The world economic crisis of 2007, caused by subprime mortgages in the USA, produced an inflection point in the European technology market. Germany and Spain improved their balance, but Italy’s got worse.

To complete this analysis, we should check whether Spain’s zero balance is due to an increase in production, as we can see in the following graph:

Graph 4 Causes of the evolution of the Spanish technology balance of payments. Data Source OECD

As we can see, receipts increased in relative terms while payments remained at a similar value. This shows a good performance of the Spanish technology-producing sector during the first years after the subprime crisis.

As a conclusion, I propose thinking about a few issues:

          The usual economic models do not properly consider the importance of technology.

          It is questionable to accept that the risk premium on public debt is related to the real growth potential of an economy.

          The Spanish economy seems to have been driven by wrong financial decisions on public expenditure and banking, instead of by governing decisions promoting the best productive competences it has acquired in recent years.

Mr. Luis Díaz Saco

Executive President

saconsulting

advanced consultancy services

 

 

He is currently Executive President of Saconsulting Advanced Consultancy Services. He has been Head of Corporate Technology Innovation at Soluziona Quality and Environment and an R & D & I Project Auditor for AENOR. He has acted as Innovation Advisor to Endesa Red, representing that company in working groups of the SmartGrids Technology Platform in Brussels.

 

Saconsulting’s Research September 2011: Comparative Analysis of Information Technologies in Europe

saconsulting has prepared a comparative analysis of the use of information technologies in Europe with data obtained from the Spanish National Statistics Institute (INE).

This work analyzes the future trends of an IT-based economy. It tries to find the countries where we can expect new business opportunities related to these technologies.

The first index shows the capability of a country to take advantage of IT. With this indicator we can analyze people’s knowledge and the availability of computers for obtaining products and services from the internet.

Graph 1 Number of people who have not used a computer in the last year, as a percentage.

In order to provide innovative IT services, it is necessary for people to have access to a computer. As you can see, there are some differences among European countries; more than 80 percent of the people in the bigger economies (Germany, France, UK) have used a computer in the last three years.

Ireland and Spain passed an interesting 70% in 2010; on the other hand, Italy does not have a good indicator yet, because 40% of its people had not used a computer in 2010.

Another interesting indicator is the use of IT to buy products and services. This indicator shows us the real penetration of IT-based business in the economy of a country.

Graph 2 People who have acquired goods and services from the Internet

This graph should be the mirror image of the previous one, but we can see some differences. Although there is a large penetration of electronic commerce in Germany and France, for instance, we can see that the development of these businesses in Spain is lower than expected; it should be closer to the Irish figure than to the Portuguese one.

The growth of this indicator follows a linear trend. We can expect electronic commerce to keep increasing in the future.

The Spanish economy has a large potential for growth from IT because of the low penetration of electronic commerce combined with sufficient access to this kind of technology.

saconsulting commercializes products and services related to innovation management activities and information technologies, in the sectors of industry and utilities, finance and services, and public administration, creating value for its clients with the most advanced management tools.

 

Saconsulting and its logo are trademarks of Saconsulting Servicios Avanzados de Consultoría S.L.U.

 

© 2011 Saconsulting Servicios Avanzados de Consultoría S.L.U. All rights reserved