Saturday 7 April 2018

Integral Technology in Blockchain, Cryptocurrency and Beyond – a concept note for discussion

by Jem Bendell and Matthew Slater

The billions of dollars of venture capital pouring into blockchain start-ups over the past year reflect how people with a serious financial interest in technology see significant potential in distributed ledger technology (DLT). Yet the actual use of these technologies for everyday applications is still rare. Some say that it is a passing fad. Others say that blockchains and cryptocurrencies like bitcoin are dangerous to our financial system, our security and the environment. How should we navigate this new sector: as innovators, advisors, regulators, or just as informed citizens?
[Image: Deepmind's AI interpretation of Escher's famous hands]

In this concept note, prepared as background for our article for the World Economic Forum, we explain how approaches to blockchain and cryptocurrency need to be grounded in a clear appreciation of the relationship between technology and society. That clarity is important not just for discussions on blockchains and cryptocurrencies, but for all software technology, as it becomes so powerful in our lives. We will therefore develop a lens, called “integral technology,” to assess the positive and negative aspects of any technology and apply it to recent innovation in the field of distributed ledgers.

When we hear people comment on blockchain and cryptographic currency being good or bad, we are often hearing different assumptions about the relationship between technology and society. So first, let us review the various ways that people look at that. The Oxford English Dictionary defines technology as “The application of scientific knowledge for practical purposes..." That is different to how the word is typically used to refer to the “artefacts” - or things - of technology, such as the arrow head, the mobile handset, blockchain, or nuclear missile. By describing both “application” and “practical purposes” the dictionary suggests that technology is best understood as a system of intentions and outcomes. That system involves people, knowledge, contexts, the transformations involved in creating artefacts, and the artefacts themselves. These are the five aspects of any technological system, and they are what we will mean when we refer to a technology in this concept note. The power of this systems perspective on technology is that it invites us to consider the wider context of politics, financing, iterative redesign processes, side effects and, finally, the values that shape technologies. That is what we will do now.

Is Technology Something to Love or Fear?

We humans attach a great deal of importance to technology because it seems able to meet many of our needs and desires. It brings aspects of our imagination into physical reality in ways that then reshape our lives and what we might imagine next. This utility makes technology very easy to sell, but it also means less emphasis is given to the costs and consequences of meeting those desires in those ways.

Given its centrality in civilisation, a range of perspectives on our relationship to technology have arisen. Some optimists believe any negative consequences are worth the benefit, and that the march of technology is synonymous with the march of human progress. This view is called “technological optimism”. Others believe that technology takes humans further from their natural state, isolating them from the world and causing numerous new problems which often require further technological solutions. These “technological pessimists” can point to a range of dangerous situations, such as nuclear waste, climate change and antibiotic resistance, to question the hubris of assuming that our technology lets us exert influence on nature without an eventual response of equivalent impact on ourselves. The German philosopher Martin Heidegger argued that modern technologies have a quality of seeking to dominate nature rather than work with it, in ways that stem from - and contribute to - the illusion that humans are separate agents acting on nature.

Some of these optimists and pessimists don’t think that we humans have much influence on what is happening. Such “technological determinism” is the view that technology can be understood as having a logic of its own and develops as an unfolding of consciousness in ways that we, our entrepreneurs or our politicians, will not, in principle, control. Current debates about the merits or risks of blockchains and cryptocurrencies often echo these perspectives. Some argue it will change, or even save, the world. Others argue that it will collapse the financial basis of our nation states. Still others argue that whatever our view, it IS the future - as if it cannot be stopped.

Counter-posed to these views on technology has been the “technological neutralist” view, which suggests that technology is neither inherently good nor bad for humanity and therefore needs responsible management to maximise its intended benefits and minimise its unintended drawbacks. That view is the most widespread in the field of Science, Technology and Society (STS) studies. Sociologists have revealed as pure fiction the apolitical view of technology development as flowing from basic science, to applied science, development, and commercialization. Instead, a variety of relevant stakeholder groups compete to influence a new technology, and they determine how it becomes stabilised as an element of society.

Therefore, despite the pervasiveness of “great man” stories in our culture, technological innovation is not the result of heroes introducing new ‘technologies’, releasing them into ‘society’ and starting a series of (un)expected impacts. Rather, innovation is a complex process of “co-construction” in which technology and society, to the degree that they can even be conceived separately from one another, negotiate the role of new technological artefacts, alter technology through resistance, and construct social and technological concepts and practices.

We share this perspective on technology. It invites us to see how innovation is a social process that we can choose to engage in to achieve public goals. We are not, however, “technology neutralists”, for a few reasons. First, we do not believe that all technologies have the same level of negative or positive potential prior to their human control. That is because all kinds of different phenomena exist under the one banner of “technology”. For instance, while nuclear fission constantly produces poisons which require millennia of custody, smart decision-making algorithms only impact the world insofar as their decisions are acted upon. Second, we do not assume humanity to be the autonomous agent in our relationship with technology. Rather, we are influenced by the technologies that shape the society we are born into. The Canadian philosopher of technology Professor Andrew Feenberg describes this situation as humans and technology existing in an entangled hierarchy: “Neither society nor technology can be understood in isolation from each other because neither has a stable identity or form,” he explains.

For us, “technological constructivism” is the perspective that technology and society influence each other in complex ways that cannot be predicted and therefore require constant vigilance by representatives of all stakeholders who are directly and indirectly affected. The implication of this perspective for innovation in blockchain and cryptographic currencies is that the intentions of innovators and financiers are important to know and influence, and that wider stakeholder participation in shaping the direction and governance of the technology is essential. This is the approach on which we base our view of developments in software in general, and blockchains in particular.

The Technological State of the World

Humanity faces many dilemmas today. Some of these are brought about by our technology, some are not, and we may hope many can be solved by a sensible use of technology in future. Climate change is the result of our rapid use of technologies to burn fossil fuels and tear up forests. Malnutrition is the result of a wide array of factors, which are difficult to blame on technology, though its persistence despite the “green revolution” would make technological optimism a questionable position today. 

One field of technology which may be exceptional with regard to regulation, and the lack of it, is Artificial Intelligence (AI), which describes the ability of computers to perceive their environment and determine an appropriate course of action. Narrow forms of AI are already in use. They often confer a tremendous advantage on those who use them well, and their use by the victorious Trump campaign and the victorious Leave campaign (in the Brexit referendum) is raising huge questions about the justice of using people's own data to manipulate their voting intentions. AI systems tend to be very complicated and sometimes produce unexpected results. But because they save labour, for example by automatically judging loan applications or driving vehicles, there is commercial pressure simply to accept the automated decisions to reduce costs. As AI is applied to more and more areas of trade, finance, the military and critical infrastructure, the risks and ethical questions proliferate.

There are more intense concerns being expressed recently about more general forms of AI that include capabilities for software to be self-authoring. That does not mean consciousness, nor mimicking consciousness, but that over time the software could develop itself beyond our understanding or control. It could 'escape' from a laboratory setting, or from within specific applications, and disrupt the world through all our internet-connected systems. Astrophysicist Stephen Hawking said "The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded." Some even fear that a rogue AI might only be disabled by killing the whole internet. Combined with the resilience of blockchains, which cannot be switched off at any one place, this possibility is a step closer. This potential existential danger invites a new seriousness about software regulation. But our concern in this concept note is more with the way machines in the service of powerful organisations are already shaping certain aspects of our lives with little accountability, and with the fact that the field of AI is almost completely unregulated.

Introducing the Concept of Integral Technology

Given these problems, it is self-evident that humanity needs a better approach to technology. How might we frame that approach? Concepts of ethics, responsibility and sustainability have all been widely discussed in relation to technology. Given our systems view of technology, we find Integral Theory to provide a simple prompt for considering its implications for society. It invites us to question internal and external impacts of any system and its embeddedness in wider systems. We are going to propose that humanity needs to develop a more consciously integral approach to the development and implementation of technology. Key to this concept is that technologies need to be more internally and externally coherent. Internal coherence describes how their design does not undermine the intention for their creation. External coherence describes how their design does not undermine the social and political system that they depend upon and which holds technologies and their protagonists to account, as well as the wider environment upon which we all depend. As that social and political system would be undermined by increasing inequality, so the effects of technology on equality are important to its integral character.

To aid future discussion, here we outline six initial characteristics of such integral technologies.

1) Meaningful Purpose: The technology system is the result of people seeking to provide solutions to significant human needs and desires, rather than to exploit people for personal gain. A positive example is the development of technologies for cataract operations that can be offered affordably to the poor. A negative example is the development of financial algorithms to front-run stock market trades.
2) Stakeholder Accountability: A diversity of stakeholder opinions is solicited and used during technological development and implementation, in an effort to avoid unexpected and negative externalities. A positive example is the cryptocurrency Faircoin, for which everything is decided through an assembly; a negative example is bitcoin, in which computer-mining stakeholders approve or veto new features based on their interests in maintaining power and profit.
3) Intended Safety: A technology does not cause harm when used in the intended ways, and those using it in unintended ways are made aware of known risks. A positive example is the indications and contra-indications on pharmaceutical labels; a negative example is pesticides marketed for use just before rice or grain harvests to increase yield, even though that increases the likelihood of toxic residues.
4) Optimal Availability: As much of the knowledge about the technology as safely possible is kept in the public domain, in order to reduce power differentials and maximise the benefits when other uses for the technology are found. A positive example is open source software, which allows anyone with the right skills to deploy it for any purpose they choose; a negative example is the ingredients of cigarettes, which are not published, making it harder for affected parties to build a case against the manufacturers.
5) Avoiding Externalities: The way in which the artefacts of the technology affect the world around them are considered at an early stage and actively addressed. A positive example is the design of products to use a circular flow of materials from the Earth and back to the Earth. A negative example is how addiction to computer games may be contributing to obesity in the young while the games companies continue to pursue similar goals.
6) Managing Externalities: Subsystems for mitigating known negative externalities are developed at the same time as the technology and launched alongside it. A positive example is the system of regulations that mandate regular physical inspections of aircraft. A negative example is government migrating social service administration to the internet and not ensuring the poorest have the computer access, skills and support they need to use the new system.

Integral Blockchain and Post-Blockchain Technologies

In the past year Bitcoin has been criticised for the huge amounts of energy it consumes to secure its blockchain. At the time of writing, some compare the consumption to that of Switzerland. Such consumption is not a necessary feature of securing blockchains, but an initial design choice of the inventor, with a system called “proof of work” being used to secure the ledger and issue new digital tokens. Other systems like Ethereum also use “proof of work” and are similarly reliant on the computer-mining companies for whether this climate-toxic code is replaced. Sadly, the “proof of work” systems of these leading technologies remain. While some proponents argue that these technologies are not so environmentally damaging, because servers are located in cold places near renewable energy sources where energy would otherwise be wasted, these are somewhat defensive post-hoc excuses. Clearly the environmental appropriateness of their code was not one of the design parameters in the minds of the designers.
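To illustrate why this design consumes so much energy, here is a minimal, purely illustrative sketch in Python of the kind of puzzle that “proof of work” miners race to solve. It is not Bitcoin's actual code, and the difficulty shown is a toy value, but it shows how security is bought with wasted computation.

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 hash falls below a target.

    Every failed guess still burns electricity, and miners around the
    world make these guesses in parallel - which is where the energy
    consumption of "proof of work" comes from.
    """
    target = 2 ** (256 - difficulty_bits)  # more difficulty bits = smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # puzzle solved; the block (and its token reward) can be claimed
        nonce += 1

# Toy difficulty so the example finishes in moments; real networks require
# vastly more guesses, made continuously by warehouses of dedicated hardware.
print(mine(b"example block header", difficulty_bits=20))
```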

In the case of Ethereum, speculation in the price of Ether affects the real cost of the Gas which is used to pay for processing transactions. That means that as the price balloons, the system loses its attractiveness for supporting activities that are high volume and low cost. It also transfers funds from the many who would use the system to the few who speculate on digital token value or own the computer-miners.
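A rough back-of-the-envelope calculation shows how speculation on Ether feeds through to the real cost of using the network. The figures below are hypothetical, chosen for illustration rather than taken from live market data.

```python
# All figures are hypothetical, for illustration only.
gas_used = 21_000        # gas consumed by a simple Ether transfer
gas_price_gwei = 20      # fee per unit of gas, set by the fee market (1 gwei = 1e-9 ETH)

fee_eth = gas_used * gas_price_gwei * 1e-9

for eth_price_usd in (100, 400, 1_000):  # possible exchange rates for Ether
    print(f"ETH at ${eth_price_usd}: fee = {fee_eth:.6f} ETH (~${fee_eth * eth_price_usd:.2f})")

# Unless gas prices in gwei fall to compensate, a rising Ether price raises the
# dollar cost of every transaction, pricing out high-volume, low-value uses.
```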

We contend that systems which are not internally coherent will eventually experience a disintegration of their intended or espoused purpose. In addition, systems which are not externally coherent will eventually experience a disintegration in their public support and their environmental basis. The situation with Bitcoin is probably unsolvable, and its carbon footprint may lead to significant regulator intervention in time. Ethereum has a wider set of aims and so despite the continual delays in moving substantially away from Proof of Work, it may still be able to address the barriers to progress presented by the short-term interests of those controlling the mining computers. However, there is no doubt that this form of governance-by-hash-power is currently an impediment to Ethereum becoming a more integral technology.

Given these difficulties, we would like to point out some lesser-known projects, which we regard as showing exemplary integral traits.

Providing the same smart contract functionality as Ethereum, the new Yetta blockchain is intended to be sustainable by design, with the low energy requirements of its codebase being moderated further by automated rewards for those nodes using renewable energy. It will also enable automated philanthropy to support the Sustainable Development Goals (SDGs).

Also dissatisfied with how both proof-of-work and proof-of-stake consensus algorithms reward those who already have the most, Faircoin developed a ‘proof-of-cooperation’ algorithm. More than that, there is an open assembly in which the price of the coin is determined every month. This is also an attempt to stabilise the price of the coin and deter speculators and the erratic price movements which arise from their profiteering. They hold that a medium of exchange is not supposed to be a vent from which value can be extracted from the economy.

One post-blockchain project, Holochain, is currently raising capital in an Initial Coin Offering (ICO). Its communications team has made many criticisms of conventional blockchains. For example, they have massive data redundancy built in, which causes such a problem for scaling that the original intention of these projects is now being compromised by innovations such as the Lightning Network. Another criticism is that, since blockchain tokens are assets without corresponding liabilities, they cannot have a stable value and thus constitute a poor medium of exchange. Holo tokens are therefore issued as liabilities, which means they have a purpose and a more stable value for as long as the project lives.

“If someone tells you they’re building a “decentralized” system, and it runs a consensus algorithm configured to give the people with wealth or power more wealth and power, you may as well call bullshit and walk away. That is what nobody seems willing to see about blockchain.” - Art Brock

Another project called LocalPay, which we both work on, seeks to build a payment system for existing solidarity economy networks. Its protagonists believe that payments infrastructure is too critical and too political to be put only in the hands of monopolists and rent-seekers. Instead, infrastructure which is held in common, equally available to all, is the basis of a fairer society. They too understand money as credit, with somebody always underwriting its value.
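To show what “money as credit” means in practice, here is a minimal mutual-credit ledger sketch in Python. It is a hypothetical illustration of the general principle rather than LocalPay's or Holo's actual code: every token someone holds is matched by a liability someone else has taken on, so balances always sum to zero and value is always underwritten.

```python
from collections import defaultdict

class MutualCreditLedger:
    """A toy mutual-credit ledger: tokens are issued as liabilities."""

    def __init__(self, credit_limit: int = 100):
        self.balances = defaultdict(int)   # account -> balance; negative means owing
        self.credit_limit = credit_limit   # how far any account may go into debt

    def pay(self, payer: str, payee: str, amount: int) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.balances[payer] - amount < -self.credit_limit:
            raise ValueError(f"{payer} would exceed their credit limit")
        self.balances[payer] -= amount     # the payer takes on (or deepens) a liability...
        self.balances[payee] += amount     # ...which underwrites the credit the payee receives

ledger = MutualCreditLedger()
ledger.pay("bakery", "farm", 30)
assert sum(ledger.balances.values()) == 0  # credits and debts always cancel out
print(dict(ledger.balances))               # {'bakery': -30, 'farm': 30}
```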

While none of these technologies is perfect, they are Integral Blockchains and post-Blockchains in that they seek to be internally and externally coherent. The internal coherence of a Distributed Ledger Technology (DLT) means that its code and business model do not undermine the intention for its creation. The external coherence of a DLT means that its code and business model do not undermine the social and political system that it depends upon, and which holds the technology and its protagonists to account, as well as the wider environmental system upon which we all depend. As that social and political system is undermined by increasing inequality, so the effect of a DLT on equality is important to its integral character. The four projects we highlighted all seek to integrate these considerations into their codebase and business model, rather than bolt on social or environmental considerations at a later time.

The Need for Technosophy

Concerns about technology are growing. Warnings over unregulated nanotechnology and artificial intelligence are now widespread. Warnings about the socially and politically damaging effects of social media are growing. There is a wider problem with how technology is financed and implemented in a free market system that means technology companies’ first duty is to deliver short-term profits to shareholders. This means many technologies are developed in a hurry, and much software is rushed to market before it is even finished. Many costs and negative impacts are hard to pin directly on the manufacturers, and thus sometimes nobody is accountable. The history of technology is one in which resistance from society leads to stabilisation around the control of, and access to, technology. Recently we have had massive diffusion of new electronics such as the mobile phone and social media, while the systems for affected stakeholders to hold these technological systems to account do not yet exist in the ways they have done in other sectors.

The law is supposed to provide for unanticipated victims of technology and thus incentivise providers to take precautions. This clearly isn’t working nearly well enough, perhaps because of the difficulty and expense of using the law, and perhaps because some consequences are very hard to prove to the satisfaction of a jury. You may recall the decades of failing to prosecute tobacco companies because the link between cigarettes and lung cancer could not be proven easily. So if the law better favoured the victims, then technology companies would do more to research and mitigate the secondary effects.

We will not be surprised if legal action begins to be taken against platforms like Facebook on behalf of millions of claimants over a range of concerns. That might involve teenagers whose clinical depression has been correlated with social media usage, or relatives of those who went on to commit suicide. Companies like Facebook may point to their internal systems for addressing such risks, and whether those are sufficient may be debated in court sometime in the future. Such legal action may bankrupt some firms, or trigger changes. But to achieve a wider shift to more integral technologies there will need to be a shift in philosophy that the law alone will not be able to compel.

It is time for a new era of wisdom in the way we make and deploy our tools. A move from the knowledge of making things to the wisdom of making things – what we call an era of “technosophy”. In the field of digital technologies, this means the urgent development of new forms of deliberative governance that use both soft and hard forms of regulation. The forms that this will take need to be developed, but there are many examples from other sectors where technical standards are agreed internationally and incorporated into national law. That would need to be done in ways that shape rather than stifle digital innovation, while also enabling stakeholders to alert regulators to risk-laden projects, such as those using AI.

One idea might be to introduce a requirement that, before software technologies can be deployed by large organisations (over 200 employees OR over 50 million USD turnover, with subsidiaries analysed as part of their parent companies), the software must be certified by an independent agency as not presenting a risk to the public. Such certifications could be based on new multi-stakeholder standards that would establish management systems for responsible software development. Any change to software code deployed by a large firm would need to be notified to the certifier of the underlying software before release, with a self-declared risk assessment based on guidance provided by the standards organisation. Systems would need to be established for determining whether particular software types and uses pose heightened risks and require more oversight. For this approach to work it would have to be worldwide, so as to stop firms moving to jurisdictions that lack these regulations. Therefore, there is a rationale for an international treaty on software safety to be negotiated rapidly, with significant resources marshalled to help these regulations be appropriately implemented globally.
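As a purely illustrative sketch, the size threshold in this proposal could be expressed as a simple rule. The function below is our own assumption for the sake of discussion, not a drafted regulation.

```python
from typing import Optional, Tuple

# Hypothetical threshold test for the proposal above, for illustration only.
def is_large_organisation(employees: int, turnover_usd: float,
                          parent: Optional[Tuple[int, float]] = None) -> bool:
    """An organisation would fall under the certification requirement if it,
    or its ultimate parent group, exceeds either threshold."""
    if parent is not None and is_large_organisation(*parent):
        return True  # subsidiaries are analysed as part of their parent companies
    return employees > 200 or turnover_usd > 50_000_000

# A 40-person subsidiary of a large group would still be covered:
print(is_large_organisation(40, 5_000_000, parent=(3_000, 900_000_000)))  # True
```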

In developing this idea, we know that many protagonists in software innovation may be appalled. There is a strong anti-authoritarian mood amongst many computing enthusiasts. But it is time to realise that some technology optimists are becoming the new authoritarians, by enabling the diffusion of technologies that have wide effects on people worldwide without those people having any influence on that process other than in one role, if they can afford it: that of consumer. The challenge today is not whether there should be more regulation of software development and deployment, but how this should be done to reduce the risks and promote the widest human benefit. We offer the concept of Integral Technology as one way of helping that debate (and not as a template for regulation).

Unfortunately, in the hype and the reality around Distributed Ledger Technologies (DLTs), we do not see many ideas and initiatives thinking beyond the initial value proposition and promised returns to investors. Some technologies like Bitcoin seem to us to have betrayed all the aims of the founder and early adopters, yet claims of internal and external incoherence are met with very questionable objections by their near-fanatical adherents. The various projects to promote social or environmental good appear marginal to the main thrust of this sector, and many add such concerns on top of existing code and governance structures that are not aligned with the project goals. On the other hand, incumbent banks and their regulators have often expressed dismissive or negative views of DLTs which suggest they do not understand the problems with existing bank power and practice, or the potential of DLTs. In some countries, outright bans on DLTs or cryptocurrencies have not been the result of wide stakeholder consultation on questions such as what systems of value exchange should be for, and for whom.

Therefore, we believe a technosophical approach to blockchain and cryptographic currencies is currently absent and needs cultivation. That is why we urgently need more international multi-stakeholder processes to deliberate on standards for the future of software technologies in general. In the field of blockchain, one event that may help is the United Nations’ half-day, high-level discussions on blockchain, taking place at the World Investment Forum in October. Whether wider political and environmental conditions will give humanity the time and space to come together to develop and implement an appropriate regulatory environment for the future of software is currently unknown, but it is worth attempting.

--

We provide a background to blockchain and cryptocurrency innovation in our free online course on Money and Society.

We also offer a Certificate in Sustainable Exchange, which involves a residential course in London (next April).

Our academic research on these topics includes a paper recently published on local currencies for promoting SME financing, a paper on thwarting a monopolisation of the complementary currency field and a paper on our theory of money, published by the United Nations.

Professor Bendell is the Chair of the Organising Committee of the Blockchains for Sustainable Development sessions at the World Investment Forum 2018 at the UN.   

We produced this concept note on the IFLAS blog for rapid sharing. To reference this Concept Note:
Bendell, J. and M. Slater (2018) Integral Technology in Blockchain, Cryptocurrency and Beyond, Institute for Leadership and Sustainability, University of Cumbria.

The image used in this post is a reworking of Escher's drawing that reflects the entanglement of author and authored. The image was reworked by the Google AI project Deepmind, in its "dream" state, to produce the image you see. Deepmind is learning to identify the contents of images. This technology will be used to save lives, sell stuff and to kill with impunity. Reworking Escher's hands in a rather bizarre fashion reflects our perspective of "technological constructivism" and our belief that the potential of AI to soon achieve (with human action and inaction) autonomous general super intelligence (amongst other dilemmas, particularly climate change) means that we need a "technosophical" approach that more wisely assesses and governs technology systems.

Send comments to drjbendell at gmail
