ICT indicators for advocacy

1. Introduction

Over the last 15 years, information and communications technology (ICT) indicators have become increasingly popularised and prominent in mainstream discourses. In the advocacy arena, indicators provide the groundwork for effective lobbying and policy-making at different levels of mobilisation. To address inequalities in access to ICTs – what is commonly referred to as the “digital divide” – it is essential to identify where inequalities exist and how exclusion is manifested, in order to target solutions precisely. Some solutions may be purely technical, such as extending infrastructure to rural communities. However, indicators can also help policy advocates and policy-makers to assess how likely different communities are to integrate ICT into their work and social trajectories – what is commonly referred to as e-readiness. Indicators, while useful, are not neutral. This chapter considers ICT indicators and seeks to clarify practices around designing and using indicators for measuring progress towards a global information society.

A robust set of indicators is difficult to achieve: it requires commitment across countries and stakeholders who agree that the exercise is useful. There also needs to be agreement on the indicators to be collected, which is shifting terrain in terms of what is perceived as useful information. Traditionally, telecom sector indicators (and the collection of statistics used to construct them) have focused on physical infrastructure. This made sense in the historical context of monopoly provision of telecom services. There was only one service provider to collect information from, and there were only two classes of users (household consumers and business users). Common carriage guidelines meant that what was going over the “twisted pairs” was not an object of analysis, which focused merely on traffic data. The only experiential data of note were quality of service indicators, which actually relate to technical service provision rather than to how effective the call was for the person making it.

However, as is becoming increasingly evident, it is not terribly meaningful to study telecoms as stand-alone infrastructure. Communication technologies are very much intertwined with human capabilities and motivations. This becomes apparent with surprises in uptake such as those that occurred with mobile telephony, prepaid services and the short message service (SMS), and more recently with wireless communications and internet diffusion. These examples illustrate the dependence of ICT infrastructure on social relations, as well as the need for ICT indicator projects to extend their inquiry beyond access to encompass usage and adoption, and also the impact of the new technologies. Historically, and even today, ICT indicators overwhelmingly focus on infrastructure and connectivity – in other words, how many phones are in use, rather than who is using them for what. This chapter argues that we need a clearer picture of demand-side conditions and use. Indicators that inquire into the nature of use and usage conditions will provide equally important information for informing policy decisions, and will certainly clarify the picture created by connectivity and technical components.

Finally, a word about divides and globalisation. Globalisation and technological change have opened up new paths for communication and information flows, but these are cut short by the dead-ends of “digital divides”. Economic and social divides have always existed, and many argue that the prevalent technological divides of the early 21st century are predominantly an extension of already existing, historical exclusion. Especially in the context of the information society, divides are fundamental to our understanding and use of indicators. In essence, divides are what indicators are really about: assessing where there are people who have fewer opportunities to improve their lives or their family (or community) livelihoods, and who have a lower quality of health, education and life than is deemed acceptable – as defined in international treaties and conventions. If we are not assessing how to bridge gaps, or how to build better bridges across them, then we are likely just mapping the terrain for service provision strategies aimed at those who already have access and are not marginalised.

This chapter is organised as follows: it begins with an overview of indicator sources, followed by a brief discussion of what indicators are, why we use them, and what they purport to represent. This in turn is followed by a consideration of the data that is used to make up indicators, and then a section which discusses indicators' inherent biases and unpacks some issues around their use. The chapter concludes with a call for further cooperation around indicators across the different stakeholder groups.

2. Key sources of ICT indicators

Most advocacy initiatives and research projects do not undertake the challenge of new data collection to devise their own ICT indicators. However, for different advocacy moments, we still need statistical information from legitimate and recognised sources. This section briefly identifies the organisations that currently have significant stocks of ICT indicators available to the public for free or at a nominal cost. Whether the entity collecting data has sufficient resources, legitimacy and mandate for such an undertaking is also important to consider. There is no shortage of sources of ICT indicators, and there are also strong overlaps with measurement of other sectors that are being transformed by the use of ICT – economic, poverty and governance assessments, health, education, etc.

Many international organisations, such as the World Economic Forum and UNESCO’s Orbicom, produce reports with indicator collections that are either devoted specifically to perspectives on the ICT terrain at national and regional levels, or which use ICT indicators in the context of a broader assessment, such as the UN Development Programme (UNDP) Human Development Index.[2] Historically, the International Telecommunication Union (ITU) has housed the mother of all ICT-related indicator collections, which it makes available in printed reports, at its website and in databases. The ITU also figures prominently in high-level initiatives to achieve consensus on which indicators should be collected and how to build better indicators, in order to better understand ICTs and their impact on society, and to more effectively assess and measure their diffusion and absorption across the world. The ITU[3] maintains roughly 80 sets of ICT statistics that are made available via its website or print publications and CD-ROMs. The ITU’s Digital Opportunity Index (DOI) draws upon eleven of these indicators to provide a composite measure and ranking of nations’ ICT capability. The ITU served as the host secretariat for the World Summit on the Information Society (WSIS). During the first phase of the Summit (2003) the theme of indicators was highlighted, and the seeds were planted for establishing the multi-stakeholder Partnership on Measuring ICT for Development and for the DOI.

The World Bank[4] collects hundreds of indicators across a number of different sectors and maintains these in different databases available at its website. The ICT at a Glance pages offer 27 ICT-related indicators, but other sectors such as health and education also have ICT-related statistics. The Knowledge Assessment Methodology (KAM),[5] initiated by the World Bank Institute, works to resolve which indicators are central to assessing the new economy and uses more than 80 of them as the basis for the Knowledge Economy Index (KEI); of these, 12 are specifically ICT-related indicators. A knowledge economy will be characterised by an educated and skilled labour force, an effective innovation system, adequate information infrastructure and conducive economic and institutional regimes. The KAM illustrates some of the complexity in assessing the ICT terrain and its contributions to socioeconomic improvements at a national level, as elements of ICT adoption and access traverse these different domains. It has been argued that in past years the World Bank, seeking to demonstrate the effectiveness of Washington consensus policies, has made choices that skew indicators in favour of this perception. As discussed below, all indicators have their respective biases.

In June 2004, during the 11th United Nations Conference on Trade and Development (UNCTAD), the international, multi-stakeholder Partnership on Measuring ICT for Development was launched. The Measuring ICT website hosted by UNCTAD and the WSIS thematic meetings on different aspects of ICT indicators and measurement are direct results of the WSIS emphasis on indicators. The Partnership is working towards agreement on a set of standardised ICT indicators to measure the information society that would be collected across all countries and allow for benchmarking and comparison.

As the information society gains momentum, reliable statistical data and indicators regarding ICT readiness, use and impact are increasingly and urgently needed. Reliable ICT statistical data and indicators help policy makers to formulate policies and strategies for ICT-driven economic growth, to measure their impact, and to monitor and evaluate ICT-related developments.

ICT statistical data and indicators must also be comparable at the international level, in order to allow developing countries to benchmark their information economies with those of developed countries and to take policy decisions to narrow the digital divide (UNCTAD – Measuring ICT website).

The Partnership on Measuring ICT for Development has developed a text, Core ICT Indicators (UN, 2005), which identifies indicators used to assess:

  • ICT infrastructure and access
  • Access to, and use of, ICT by households and individuals
  • Use of ICT by businesses
  • The ICT sector and trade in ICT goods

The text describes the intention of each indicator and proposes model questions for obtaining an accurate response and hence accurate data. This list of indicators does not claim to be complete and identifies the process as continuous and subject to periodic review. In the same vein, the UN Millennium Development Goal (MDG) website[6] provides a metadata section listing the methodology and data used to inform the MDG indicators.

These international agencies work with national level statistical agencies to obtain data, and in the case of the Partnership on Measuring ICT, to arrive at consensus on which indicators should be collected and the methodology for their collection. An extensive (and perhaps exhaustive) list of national statistical agencies is maintained on the Measuring the Information Society website.[7] Collecting and maintaining (updating on a regular basis) a stock of indicators is an intensive and costly undertaking for which some developing countries may not choose or be able to allocate resources. In this case, regional associations such as the Economic Commission for Latin America and the Caribbean (ECLAC) and regional development banks such as the Inter-American Development Bank and the African Development Bank can be important sources for much statistical information and analysis, as they monitor markets, economic conditions, stability, regulatory and governance conditions – many of which will intersect with the ICT terrain. Regional level research organisations such as Research ICT Africa have also been undertaking household level data collection across a number of countries.

During the mid-1990s, when privatisation and liberalisation of telecom networks became pervasive around the world, independent national regulatory authorities (NRAs) were established to oversee the reforms. In order to effectively inform regulatory processes and decision-making, NRAs collect information about the sector on many different levels. Some regulators are proactive about making this information publicly available.[8] Where NRAs are under-resourced, regional regulatory associations have a role to play in better coordinating statistical information about the ICT sector.

Finally, there are a number of research and market intelligence groups that collect and maintain proprietary stocks of information and analysis. These usually cost more than academic or grassroots research budgets will permit. The Economist Intelligence Unit (EIU) is an exception to this, making its yearly report on e-readiness rankings of 65 countries publicly available.

Table 1. KEY ICT INDICATOR SOURCES
Source Website
International Telecommunication Union (ITU) <www.itu.int>
Millennium Development Goals (MDGs) Indicators <mdgs.un.org>
Organisation for Economic Co-operation and Development (OECD) <www.oecd.org>
Research ICT Africa! (RIA!) <www.researchictafrica.net>
UNCTAD: Measuring the information society <measuring-ict.unctad.org>
United Nations Development Programme (UNDP): Human Development Report <hdr.undp.org>
World Bank (WB): Information & Communications for Development (IC4D) - Global Trends and Policies <www.worldbank.org>
World Bank (WB): World Development Indicators <www.worldbank.org>

 

Table 2. PREDOMINANT ICT INDICATOR INDICES
Index Source
Digital Access Index (DAI) International Telecommunication Union (ITU)
Digital Opportunity Index (DOI) International Telecommunication Union (ITU)
E-Readiness Index Economist Intelligence Unit (EIU)
E-Readiness Index United Nations Division for Public Administration and Development Management (UNPAN)
ICT Index World Bank
Index of ICT Diffusion United Nations Conference on Trade and Development (UNCTAD)
Index of Knowledge Societies (IKS) World Bank (WB)
Infostates Orbicom
Knowledge Economy Index (KEI) World Bank Institute
Networked Readiness Index (NRI) World Economic Forum
Technology Achievement Index (TAI) United Nations Development Programme (UNDP)

 

3. What are indicators?

ICT indicators provide a snapshot summary of information about projects, countries or regions. The vantage point of the snapshot provides an indication of who is taking the picture and what is being identified as important – or not. By way of example, a security firm could develop a risk indicator for retail stores taking into account such factors as the number of entry points to the store, how many security cameras there are, timer locks on the store safe, bars on windows, and background check protocols for hiring staff. Such an indicator would purport to advise on the likelihood of the store being targeted for robbery and being successfully robbed.

The security indicator could then be used by insurance firms to assess insurance risk; by security firms to assess where they need to apply their efforts to reinforce the existing security system; and by potential thieves to pinpoint security weak points. Conversely, the owner of the enterprise might also use the security indicator (perhaps without divulging its constituent statistical elements) as supporting evidence when claiming to potential investors that the business is not risky. This would be a misleading use of the indicator, as investors are looking for a different kind of security, or at least a broader definition of security. The indicator provides no evidence, for example, on the likelihood of the owner using the store to launder money, or under-reporting earnings for the purpose of tax evasion.

This kind of issue also arises in using indicators for advocacy. As will be discussed further below, indicators are not neutral and express different things. The fact that the providers of a particular set of indicators are from a different side of the fence does not mean that their data or methodology is necessarily corrupt, flawed or bad. We can assume, nonetheless, that there are different reasons for devising indicators, which may have a different focus, and thus may come at the data from a different perspective. Despite agreement on the importance of ICTs there is no sweeping consensus on approaches or conceptual models. What are the most salient aspects that will demonstrate progress? And what kinds of progress? Do we measure simply the incidence of infrastructure and technology penetration? Or do we go further to also include data to document economic progress and social progress?

Indicators are an abbreviated language or device: they point, but do not explain. So it is useful to know who is doing the pointing, as well as their motivation for pointing in the first place, and the evidence used to legitimise their authority to point convincingly. Often, we accept the authority of many indicators without delving into their methodologies. Overall, indicators must be understood as value-laden and not neutral. They provide a snapshot of progress in the context of the particular world view of their creators and contain their own inherent values.

Indicators can contribute to three main aspects of ICT policy development:

  • Needs assessment
  • Monitoring progress in different economic and social sectors
  • Providing evaluation and feedback for specific programmes and initiatives.

Indicators are essential for setting policy priorities, measuring progress towards targets, and benchmarking results. Thus, indicators can also be viewed as having a definitional function in terms of setting the parameters of the problem to be addressed. The decision about which indicators are important to collect provides evidence of what is being valued. The definition, design and measurement underlying indicators must be carried out with reference to how they are intended to be used. Otherwise, indicators can be false and misleading measures. This underlines the importance of policy advocates being proactive in defining which indicators are important.

One of the most obvious examples is that only recently have statistics and indicators disaggregated by gender been viewed as essential in mainstream practices – although it has long been known that women and girls typically do not have the same level of access to training and technology as boys and men. Without this kind of statistical information about access levels between the sexes, no real targets can be set, and realistic strategies for achieving them cannot be devised. In addition to gender, there are also many instances of the already marginalised not being counted in statistical indicators. The excuse or claim is that they are difficult to include for a variety of reasons. Advocacy groups working at the grassroots level are particularly well situated to help correct this oversight where it occurs, giving the marginalised a voice – or at least a number.

Indicators can serve an advocacy function in support of demands around national level policy-making; to illustrate a basis for universal service projects; to lobby for a particular change in regulation; and so forth. There are international conventions for national level collection of data to report on a variety of socio-demographic phenomena such as population, health, educational attainment, and economic performance (among others). These data are used comparatively and across time to inform policies, target programmes, and guide investment decisions. Data about technology penetration and use are increasingly being used to form part of this picture.

Data are collected and combined to form indicators. Indicators are an interpretation of the data and provide a snapshot of the assessed terrain from the perspective of what we want to show. Thus, if we consider the information society as mainly being concerned with access to technology, we will build an indicator that balances data about population, penetration of infrastructure and the cost of using it. The change in the indicator over time will provide feedback on policy performance, as illustrated in Figure 1. The next section considers the practical challenges of moving along the spectrum from data collection to indicators.
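
To make this concrete, the following is a minimal, hypothetical sketch (in Python, with invented weights and figures) of how raw data on penetration and cost might be combined into a single access score. The weighting and normalisation choices are exactly the kind of value judgements discussed above, not a reproduction of any published index.

    def access_indicator(subscribers, population, basket_cost, monthly_income,
                         weight_penetration=0.6, weight_affordability=0.4):
        """Return a 0-100 score combining penetration and affordability."""
        penetration = subscribers / population                     # share of people subscribing
        affordability = 1 - min(basket_cost / monthly_income, 1)   # cheaper relative to income -> closer to 1
        return round(100 * (weight_penetration * penetration
                            + weight_affordability * affordability), 1)

    # Two hypothetical countries with equal penetration but different costs:
    print(access_indicator(subscribers=30, population=100, basket_cost=5, monthly_income=100))   # 56.0
    print(access_indicator(subscribers=30, population=100, basket_cost=40, monthly_income=100))  # 42.0

Changing the weights, or normalising cost against a different income measure, changes the resulting ranking – which is why the methodology behind a composite score matters as much as the score itself.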

4. Data collection issues

Data is raw information such as the numbers programmed in your speed dial, the last ten numbers dialled on your mobile phone, the number of times you lent your phone to a friend last week, how many people use prepaid telephony and how many have postpaid subscriptions, how much it costs to make a call, whether you have email access via your phone, your home, your local telecentre or not at all, the hours that telecentres are open for business, whether your mother has ever made a long distance call, and so forth. Which of these are interesting and useful will depend on what is being measured. Which data will actually be used is contingent on a number of factors.

4.1 Access to data

Data being out there does not necessarily mean that it is available or accessible. As shown in Figure 1, there has to be a determination of what kinds of illustration the data is intended to provide. If the policy being assessed targets women or youth, then it is clear that information from those groups will need to be pursued. As has often been the case, women, for example, have not been specifically targeted in policy, resulting in a lack of gender-disaggregated data. This means that a baseline for assessing initiatives that do now target women does not exist, making it difficult to assess progress or the success of such initiatives.

Data sources may have different reasons for withholding information. A recent survey on small and medium enterprise (SME) use of ICT (Esselaar et al, 2006) found that entrepreneurs provided inaccurate information due to concerns around taxation and competition, and also because of a lack of record-keeping.

4.2 Sample size and selection

To achieve a legitimate sample for an international-level indicator you need a lot of data, and data collection can be an expensive proposition. By way of example, the 1990 US census cost USD 2.5 billion to administer a 33-question census to a population of 248,718,301, which works out to USD 10.02 per person, or USD 75.5 million per question. In 2000, the 53-question census cost USD 4.5 billion, at USD 15.99 per person or USD 84.9 million per question.[9] These costs do not include the time taken by individuals to self-administer the questions, and if a researcher-administered survey takes about 15 minutes per respondent, the cost of achieving a representative sample – let alone one adhering to standards for international comparability – quickly becomes apparent.
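
The arithmetic of fieldwork makes the same point. The sketch below uses entirely invented figures (interview length, interviewer wage, overhead multiplier) to show how quickly costs scale with sample size and with the number of countries needed for international comparability; real surveys also carry travel, training, supervision and data-entry costs that are ignored here.

    def survey_cost(sample_size, minutes_per_interview=15, hourly_wage=10.0,
                    overhead_factor=2.0):
        """Rough estimate: interviewer hours x wage x an overhead multiplier."""
        interviewer_hours = sample_size * minutes_per_interview / 60
        return interviewer_hours * hourly_wage * overhead_factor

    print(survey_cost(3_000))       # 15000.0 for one hypothetical national sample
    print(survey_cost(3_000) * 20)  # 300000.0 if repeated in 20 countries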

4.3 Secondary use of data sets

While internationally comparable indicators have their uses, in many instances there are more practical strategies for collecting information that is more complete than what already exists and sufficiently accurate for project or policy development. An example of this is using ministry of education records, or even local school board records, to obtain information about ICT availability and use at the school or classroom level, rather than going through the national statistical institutes.

Another important strategy is secondary use of data sets, using existing data sets for different purposes and combining data sets for reanalysis. There is a tendency to push for collection of data, with less attention being given to creative approaches to secondary analysis which can be equally revealing. For developing countries in particular this may be the fastest, best and cheapest way to shed initial light on a number of key issues. But there is also the risk of inheriting and hence perpetuating biases in the design of the collection model or other data errors.

4.4 Survey design

Data collection methodology is a large area and we will not go into detail here, but will only provide an illustration of this aspect’s complexity. For example, if you want to devise a survey to assess affordability of mobile telephony, as undertaken by LIRNEasia in their Telecom Use on a Shoestring project,[10] what kinds of evidence do you collect and what questions do you ask to ascertain this? In terms of affordability, are you concerned with the cost of services or the cost of acquiring a new handset and subsequent use? Some questions for the former include how often people use their phone to make calls (or conversely whether they only use it to receive calls); how expensive they perceive using their phone to be; and whether the cost of calls being reduced by X-percent would alter their usage of the phone. Further questions to round out the picture include inquiry into different modes of communication (fixed, mobile or public access), what the respondents felt were the benefits of access, and the respondent’s monthly communication expenditure.

Once the questions are determined, however, it is still a methodological challenge to get accurate results. Even the last question, on monthly communication expenditure, can be difficult for respondents to answer accurately from memory, especially if prepaid cards are used.
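
One workaround – sketched below purely as an illustration, not as LIRNEasia's method – is to derive expenditure from questions respondents can answer more reliably, such as how often they top up and with what denomination.

    def monthly_spend_from_topups(topups_per_week, typical_denomination, weeks_per_month=4.33):
        """Estimate monthly prepaid spend from top-up frequency and typical amount."""
        return topups_per_week * typical_denomination * weeks_per_month

    print(monthly_spend_from_topups(topups_per_week=2, typical_denomination=1.0))  # ~8.66 per month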

4.5 Summing up…

Reliable indicators aim for transparency around data sampling and collection procedures. This transparency is achieved through clarity of definitional terms and their explication, and a clear statement of methodology and methodological issues, including how conflicting data are resolved, how often new data are collected, the size of the sample, and the strategy for achieving a random and representative sample. Because the political motivations for collecting particular kinds of data matter greatly, it is also useful to have clarity around who is responsible for data collection and under what conditions (e.g. of remuneration).

5. Issues around indicators

Indicators are not value-free, but because they are expressed in numbers, they appear to be objective answers to what may seem to be straightforward questions, such as: how many people have access to a telephone? The Partnership on Measuring ICT has made significant strides on some of the definitional problems, for example in arriving at common definitions for terms such as access and common methodologies for indicator collection. However, ICT indicators (or indices) increasingly attempt to capture more complex questions, such as a nation’s e-readiness or the link between ICT and growth. This section seeks to identify ways in which indicators can be misused or misinterpreted.

5.1 Harmonising definitions and indicators

How many people have access to a telephone? There are now different ways to connect to telecom networks and there are different kinds of ICT services and applications to allow people to communicate with others. Accordingly, there has been a shift from a focus on universal service – signalling aspirations for a fixed line to every home to provide affordable basic telephone service – to universal access – recognising the possibility of providing reasonably affordable access to communication services across communities by different access channels. Universal access terminology recognises that having access to a telephone does not necessarily imply ownership of either a fixed telephone or a mobile handset. However, beyond ownership there are the further categories of subscriber, user or percentage of the population within range of a signal. The definition for user varies widely from someone who has used a telephone sometime during the last year, in the last three months, in the last month, a certain number of times per given timeframe, etc. It is easy to see how users and subscribers might be inadvertently used interchangeably, thus creating inaccurate perceptions. In the same vein, the percentage of the population (or number of inhabitants) with access to a signal does not actually tell us how many are able to avail themselves of productive use of the signal.
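
A small, hypothetical sketch shows how much the headline figure depends on which definition of “user” is adopted; the sample data are invented.

    # Days since each person in a hypothetical sample last made or received a call
    # (9999 = has never used a phone).
    last_use = [2, 10, 45, 100, 200, 400, 9999]

    def users(last_use_days, window_days):
        """Count people who used a phone within the given window."""
        return sum(1 for days in last_use_days if days <= window_days)

    total = len(last_use)
    print(f"used in the last month:    {users(last_use, 30)}/{total}")   # 2/7
    print(f"used in the last 3 months: {users(last_use, 90)}/{total}")   # 3/7
    print(f"used in the last year:     {users(last_use, 365)}/{total}")  # 5/7

The same seven people yield penetration figures ranging from under a third to over two thirds, depending only on the reporting window chosen.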

If we consider the community access points identified in the country case studies in this report, we find that there are telecentres, kiosks, public internet access points, community technology centres, public service stations, coin-operated public phones, etc. It is difficult to compare these across countries, not because they have different names, but because the different names refer to different entities. Some are stand-alone public telephones, others are telephone resell points (and just these two examples have very different business models and service implications); others provide internet services, which may include voice over internet protocol (VoIP) telephony, others may be service centres which provide support services in addition to technology access, and so forth.

Harmonisation of terminology and of methods for assessing and assigning values also needs to occur at other levels, such as tariffs (per-minute, per-second or per-pulse charges, or flat rates); affordability, which involves regional differences; accessibility, in terms of distance; broadband services, for which there is some dispute between 3G and WiMax offerings; and so forth. To illustrate why such precision around terminology matters: a lack of precision can allow an operator to claim it has fulfilled universal access requirements by installing a single payphone in a village, in a context where fulfilling universal access obligations is a licence condition for exclusivity of service provision.

5.2 Indicators from supply and demand perspectives

Not surprisingly, there will often be a divergence between what operators want to demonstrate (supply of services) and advocacy needs that are made evident based on how ICTs and their applications and services are used and made available across different socioeconomic sectors of society. Clearly, supply- and demand-side concerns are two sides of the same coin.

Supply-side indicators depict the ICT terrain from the service providers’ perspective: how much of the terrain is serviced by a signal, how many fixed lines are available, how big the market is (for different kinds of services), the conditions of offer (pricing). This kind of data is captured in information that is required for reporting to regulators and government authorities (such as for taxation and business practices). In addition to the picture of the market that this information presents, a key question is: Who has access to this information? In many cases, operators retain such information for solely internal use; and in some cases, regulators obtain operators’ indicators but do not make them further available.

Demand-side indicators look to evidence about how services are consumed: by whom (e.g. which members of the family), where services are accessed, whether users would like to use services more than they do – and why they can’t do this (because the call centre is only open when they are at work, because it costs too much, because they do not know how to use particular service components, and so forth).

5.3 Qualitative vs. quantitative assessments

There are different ways of collecting and presenting information about the ICT sector, as illustrated in the previous section. With a view to international comparability and documenting progress by periodic sampling, there is a logic to using numbers. A quantitative survey or assessment counts things: how many phone lines exist, how many homes and schools have computers, etc. However, as shown in terms of different examples of indicator criteria (Boxes 1 and 2), measuring the “digital divide” is complicated by qualitative factors: aspects that are not easily counted, but which have a bearing on how effectively ICTs are deployed.

An over-reliance on quantitative analysis will fail to capture the quality of experience. For example, the introduction of computers into schools may produce impressive statistics, but only a qualitative analysis will identify how well they are being used and what direction skill-development initiatives should take. Interviews and case studies can be used to collect this kind of qualitative information. The statistical presence of ICT infrastructure does not guarantee access for the full range of potential users. By way of another example, a teledensity indicator does not show how telephones are used. The typically low teledensity rates of developing countries must be understood in terms of the practice of shared use of such technologies – far less common in developed economies, and not made explicit in the simple indicator.
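
The arithmetic below, using invented figures, illustrates why identical teledensity readings can mask very different levels of effective access once shared use is taken into account.

    def people_with_access(lines_per_100, users_per_line):
        """Effective access per 100 inhabitants, capped at 100."""
        return min(lines_per_100 * users_per_line, 100)

    # Country A: high teledensity, little sharing.
    print(people_with_access(lines_per_100=60, users_per_line=1.2))  # 72
    # Country B: low teledensity, but each line is shared by a household or kiosk.
    print(people_with_access(lines_per_100=5, users_per_line=8))     # 40

A bare teledensity comparison (60 versus 5) suggests a twelve-fold gap; accounting for sharing narrows it considerably – which is precisely the nuance the simple indicator omits.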

Box 1. BRIDGES’ REAL ACCESS/REAL IMPACT CRITERIA
    (1) Physical access to technology
    Is technology available and physically accessible?
    (2) Appropriateness of technology
    What is the appropriate technology according to local conditions, and how do people need and want to put technology to use?
    (3) Affordability of technology and technology use
    Is technology access affordable for people to use?
    (4) Human capacity and training
    Do people understand how to use technology and its potential uses?
    (5) Locally relevant content, applications, and services
    Is there locally relevant content, especially in terms of language?
    (6) Integration into daily routines
    Does the technology further burden people's lives or does it integrate into daily routines?
    (7) Socio-cultural factors
    Are people limited in their use of technology based on gender, race, or other socio-cultural factors?
    (8) Trust in technology
    Do people have confidence in and understand the implications of the technology they use, for instance in terms of privacy, security, or cybercrime?
    (9) Local economic environment
    Is there a local economy that can and will sustain technology use?
    (10) Macro-economic environment
    Is national economic policy conducive to widespread technology use, for example, in terms of transparency, deregulation, investment, and labour issues?
    (11) Legal and regulatory framework
    How do laws and regulations affect technology use and what changes are needed to create an environment that fosters its use?
    (12) Political will and public support
    Is there the necessary political will in government to enable integration of technology throughout society?
Source: bridges.org (<www.bridges.org>)

 

Box 2. ORBICOM'S ASSESSMENT INDICATORS
Infodensity
Networks
  • Main telephone lines per 100 inhabitants
  • Waiting lines/mainlines
  • Digital lines/mainlines
  • Cell phones per 100 inhabitants
  • Cable TV subscribership per 100 households
  • Internet hosts per 1,000 inhabitants
  • Secure servers/Internet hosts
  • International bandwidth (Kbs per inhabitant)
Skills
  • Adult literacy rates
  • Gross enrolment ratios
  • Primary education
  • Secondary education
  • Tertiary education
Infouse
Uptake
  • TV equipped households per 100 households
  • Residential phone lines per 100 households
  • PCs per 100 inhabitants
  • Internet users per 100 inhabitants
Intensity
  • Broadband users/Internet users
  • International outgoing telephone traffic minutes per capita
  • International incoming telephone traffic minutes per capita
Source: Orbicom <www.orbicom.ca>

 

5.4 One dollar a day and $100 laptops

By definition, indicators convey complex information in a more concise format. Although more useful in some senses, reductive presentation of complex realities may provide an image that, rather than illuminating a situation, actually conceals it. By way of example, for those working in the area of telecommunications, teledensity (the number of telephones per 100 people) has historically been a standard measure of a given level of telecom infrastructure development. It is acknowledged that a country's teledensity denotes an average across rural and urban areas, and that there may also be socioeconomic constraints on use or roll-out of infrastructure in certain areas.

However, ICT indicators are becoming popularised and increasingly used by a wider set of actors from different backgrounds. Additionally, as ICTs have occupied an increasingly important space in society and the economy, they are much more widely reported in the popular media, which further simplifies the presentation of indicators. An example of this is the almost sloganistic reporting that there are more phones in Manhattan than in all of Africa. While this has limited use as an indicator beyond a very basic level of consciousness raising, it nonetheless paints an evocative picture that people can use to grasp the enormity of the “digital divide”.[11] That this quasi-indicator has not been true for a long time is pretty much irrelevant to its continued use.[12] In the same vein, in the early 1990s, the number of times an encyclopaedia could circle the earth in a minute provided a visual image of the speed of computers that people who were not familiar with them could relate to. ICT researchers, regulators and telecom service providers are clear on how teledensity is used. But new users of the terminology and the indicator may not know to connect the indicator with its underlying nuances and components – opening the door to misinterpretation, misleading uses or fundamental misconceptions.

Another example of this is the international poverty indicator to identify the number of people in the world living in extreme poverty. This is the one dollar a day poverty line. Target 1 of the MDGs is to “Reduce by half the proportion of people living on less than a dollar a day.” This is a very strong and evocative image. Few people reading this publication could subsist on one dollar per day.

But what does it mean to live on less than one dollar per day? In simply asking this question it quickly becomes apparent that the image is evocative but that the indicator has little to do with any kind of purchasing power for people subsisting at this level (and perhaps even little to do with an accurate assessment of real extreme poverty levels). There are many different ways of measuring poverty and creating indicators to assess poverty and progress towards its alleviation. Beyond a vague economic framing, the concept of one dollar per day provides very little actual information about the different conditions of poverty.

The $100 laptop is a similar catch-phrase phenomenon – positing an economic and technical solution for the inability to provide education to the world's poorest children. The terminology “digital divide” also posits a digital solution to divides that are entrenched in historical socioeconomic exclusion and inequalities.[13] Complex issues are framed only in economic and technical terminology. For ICT indicators, this issue also arises with the use of concepts such as e-readiness and access to embody a range of meanings across technical infrastructure, social factors such as language and content, and personal training and capacity attributes.

5.5 Different priorities, influences and results

Over the past decade and a half, there has been a proliferation of studies documenting the fact that ICTs are fundamental to our economies and societies. There has also been a growth in indicator indices to assess and encapsulate different aspects of sector growth, ICT diffusion, and links between ICTs and productivity, the economy, educational attainment, and so forth. In short, there is a range of different reasons for wanting to measure ICT. The Sibis report (Technopolis, 2003) discusses the traditional approach to ICT measurement across three fundamental views – access, use and impact – with access being the easiest area to document objectively and historically the predominant focus of ICT indicators.

Table 2 lists ICT indicator indices, which assess and rank countries on various aspects of ICT diffusion and absorption. While at a glance they all appear to share a common outlook on a similar area of inquiry, they actually have a range of different foci depending on which element of access, use or impact is most strongly stressed. These are generally the overarching categories for assessment, although each major index uses its own terminology, indicating the particular spin of its signature approach (a sketch of how such sub-indices can be aggregated follows the list below). For example:

  • Digital Opportunity Index: opportunity, infrastructure, and utilization.
  • Orbicom Infostate Index: infodensity (the sum of all ICT stocks), and info-use (consumption flows of ICTs/period), with infostate being the aggregation of infodensity and info-use.
  • Economist Intelligence Unit E-Readiness Index: connectivity and infrastructure; business environment; consumer and business adoption; legal and policy environment; social and cultural environment; and supporting e-services.
  • Network Readiness Index (World Economic Forum): environment, readiness, and usage.
  • Index of ICT Diffusion (UNCTAD): connectivity, access and policy.
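
As a hedged illustration of how such composites are assembled, the sketch below aggregates two invented sub-index scores with a geometric mean – one common aggregation choice, which rewards balanced development because a very low score on one dimension cannot be offset by a very high score on another. It is not a reproduction of any of the indices listed above.

    from math import prod

    def composite(sub_indices):
        """Geometric mean of sub-index scores (e.g. relative to a reference country = 100)."""
        return prod(sub_indices) ** (1 / len(sub_indices))

    print(round(composite([120, 80]), 1))  # 98.0 - balanced profile
    print(round(composite([180, 20]), 1))  # 60.0 - same arithmetic mean, much lower composite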

A study on the gender “digital divide” in Francophone Africa, A Harsh Reality, asserts that the components of a gender “digital divide” indicator should comprise control, content relevance, capacities and connectivity (Mottin-Sylla, 2005, p. 34). A vantage point neglected in the design of most ICT indicator and statistical collections is gender differences in terms of access, use and impact. Use and impact issues are often premised upon access indicators, and this is problematic: countries demonstrating increased infrastructure access may obscure who is allowed to use the technology at a community or household level. While a gender-sensitive ICT indicator will collect information on access, use and impact in a gender-disaggregated format, the gender “digital divide” indicator devised for the Francophone Africa study is prescriptive, providing information with the intention of targeting women’s additional unequal conditions for correction. The Real Access/Real Impact criteria developed by Bridges[14] and Orbicom’s assessment categories (Boxes 1 and 2) further illustrate frameworks extending beyond access to infrastructure.

Graph 1 shows the lack of consistency across the different indices. The country results for different indices are shown as a percentage of their ranking at the Latin American and Caribbean level. Thus, if the findings were similar across the indices, the lines would run parallel, as they do for Argentina, Brazil and Colombia for the UNPAN, WBICT and KEI indices – shown at the top left corner of the figure. This, however, is the only point of agreement; elsewhere the results diverge widely. Kauffman and Kumar (2005) attribute this to the fact that there are three overarching perspectives for single-item composite ICT indices such as those shown here: ICT readiness, ICT intensity, and indices attempting to measure the impacts of ICTs. Minges’ (2005) work further illustrates the trade-offs and different assessment strategies. This is shown particularly well by the table below, reproduced from Minges, which depicts the different choices of ICT infrastructure indicators within the various indices.

Number of indicators related to infrastructure 3 6 10 8 2 3 4 12 4 6 5 11
Number included in infrastructure category 3 2 5 8 2 3 3 8 4 4 5 5
Internet penetration X O O X X X   O   O X X
Mobile penetration   X X X     O X   X X X
Fixed penetration   X   X     X X   X X X
PCs per capita       X   X   O   X X X
Total telephone penetration X       X X     X      
Internet host penetration             X X X X    
Internet affordability   O O X               O
Secure internet servers       X     X         O
International internet bandwidth per inhabitant   O           X       O
Broadband penetration   O O X               O
Electricity consumption X               X      
Proportion of households with fixed line     X         O        
Proportion of households with a TV               O       X
Mobile tariffs     O                 O
Proportion of households with internet     X                  
Mobile internet subscribers     X                  
Proportion of households with a PC     X                  
Waiting lines/main lines               X        
Digital lines/mainlines               X        
Cable TV penetration               X        
Secure servers/internet hosts               X        
Technology exports                 X      
TVs per capita                     X  
Hotspot (WiFi) penetration       X                
Local call charge                   O    
Fixed tariffs                       O
Mobile population coverage     O                  
Source: Minges (2005)
Note: “X” means the indicator is found in an infrastructure category whereas “O” means that the indicator is included in the index but located in another category.

Small differences in the choice of indicators can result in dramatically different rankings across countries. One example highlighted is the different results achieved by two indices measuring countries' technical capabilities: the UNDP's Technology Achievement Index (TAI) counted internet hosts, whereas the Archibugi and Coco (ArCo) assessment counted internet users. Minges (2005, p. 22) comments: “Because a host can be located anywhere, it is not really a good measure of the intensity of internet usage in a country.” In the same vein, Goswami (2006) argues that the Networked Readiness Index (NRI) has too many components:

[S]tate of cluster development, number of utility patents, subsidies for R&D, administrative burden, efficiency of tax system, overall infrastructure quality, extent of staff training are factors common to a number of industries and have little connection with ICT environment, readiness or usage per se. However, they have the same weight as other more directly related ICT indicators.

Indicators should be explicit with regard to their respective methodologies. It is often the case that methodological statements remain unread; indeed, many users of indicators lack the background in quantitative methods necessary to understand the complex statistics, or do not have the time to consider the raw data. Nonetheless, bundling complex calculations (by experts!) into a single index number offered at face value is not best practice, and leaves little room for subsequent analysis and scrutiny. The security indicator example above illustrates how indicators can be used out of context to misrepresent a given situation. The same can be done simply by not clarifying the methodology behind the indicator. As shown in the examples around data collection, there are different ways for collected data to be biased or inaccurate. The same can also be true of how the data is subsequently treated to form the basis of an indicator.

Not all transparency questions are pernicious. Some are simply matters of avoiding misinterpretation or imprecision caused by a lack of clarity around methods. Graph 2 provides an example of this. The Knowledge Economy Index offers the overall indicator in absolute terms or adjusted for population. As can be seen in the figure, this results in a significant difference for Latin American economies with large populations, such as Brazil and Mexico, where there are likely to be larger gaps between different socioeconomic sectors and between rural and urban inhabitants.
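
The effect of such a normalisation choice is easy to reproduce with invented figures: a large country can lead in absolute terms and still trail once the same value is divided by population.

    countries = {
        # name: (absolute_value, population_in_millions) - hypothetical figures
        "Country A (large)": (9_000, 190),
        "Country B (small)": (1_200, 17),
    }

    by_absolute = sorted(countries, key=lambda c: countries[c][0], reverse=True)
    by_per_capita = sorted(countries, key=lambda c: countries[c][0] / countries[c][1], reverse=True)

    print("absolute ranking:  ", by_absolute)    # Country A first
    print("per-capita ranking:", by_per_capita)  # Country B first (about 70.6 vs. 47.4 per million)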

5.6 Gender

Despite repeated calls for the inclusion of gendered indicators and statistical information disaggregated by gender, there is still a lack of progress in this regard. Huyer et al (2003) discuss a number of important points around why ICT indicators disaggregated by gender are so important. The first goes to the issue of women being instrumental in the poverty reduction targeted by the MDGs. Secondly, “ICTs are expected to play a catalytic role as well” (Huyer et al, 2003). With studies showing that for the financially constrained there is a generalised positive social impact of women’s access to ICT – particularly in terms of family health, but also in terms of employment – it is imperative first to mobilise advocacy around inclusion, and subsequently to monitor women’s and girls’ participation in the information society. This is of course difficult to undertake if gender-disaggregated statistical information is not made more routinely available.

 

Although it is often pointed out that the “digital divide” is a manifestation of other already existing (and entrenched) divides, Huyer et al (2003, p. 145) provide evidence that the “relationship between the gender divide and the overall digital divide is very tenuous and does not support the argument that the two move in tandem.” Thus, work to reduce a “digital divide” will not necessarily extend benefits to women and girls – unless the programme is specifically targeted and implemented with the intention of addressing their particular needs within particular socioeconomic contexts.

Until 2003, the only sex-disaggregated ICT data collected by the ITU was the percentage of female employees in telecom administrations; since 2003, it has added only two new sex-disaggregated indicators: female internet users as a percentage of total users, and female internet users as a percentage of all females (Hafkin, 2006, pp. 52-53). Internet use indicators are important, but in developing country contexts access to mobile telephony is also a very important indicator, as mobile telephony is rapidly becoming the predominant means of universal access. The Research ICT Africa household surveys[15] specifically addressed mobile access by women and men – one of the first large-scale ICT index studies to do so.

5.7 Summing up…

We rely on indicators to inform advocacy processes and to assess the progress of ICT in terms of contributing to social goals. Because indicators carry inherent biases, using them strategically means being cognisant of these biases and, further, being explicit about our own proactive biases around inclusion and empowerment. This means that demand-side indicators are especially important for informing analysis across different social classes and marginalised sectors of the population. Qualitative approaches in particular can further inform quantitative assessments. Household surveys and affordability studies are examples of such contributions. The project of filling in the gaps – the questions that are not asked, the sectors of the population that are not surveyed – and of correcting or adding to indicator methodologies should not happen on the sidelines of mainstream indicator communities.

Further, it may be useful to focus more on demand-side information to better ascertain technological adoption and productive integration into different societal sectors.

[T]he shortening of technology product life cycles makes any tracking measurement problematic. The problem is compounded by the fact that user definitions and perceptions of technology vary across countries. Therefore, over the medium and long term, measuring experience, measuring consumers’ satisfaction levels, insulates indicators from changing technology and its varying nomenclature (Technopolis, 2003, p. 15).

Because of the multiple paths to connectivity that now exist, with new paths emerging, what will be most important to document is the quality of access and its subsequent impact on quality of life and on the creation and opening up of opportunities. This necessitates a more qualitative approach to devising indicators and a more nuanced understanding of impacts.

6. Indicators for advocacy – emerging frameworks

How we count things to assess our progress towards universal access to ICT will continue to be challenging. As noted in the introduction, we are no longer only counting the number of business and residential subscriptions to a monopoly service to arrive at a snapshot of the sector. There are different kinds of users and subscribers, and there are multiple access channels to a wide and ever-increasing array of applications and services. Further, we need to know much more about this dynamic terrain than mere information about access to technology. And, as illustrated in the previous sections, there are different perspectives and interests involved in how ICT markets, use, adoption, etc., are depicted. This concluding section focuses on ways that civil society can mobilise indicators in service of its own advocacy agenda, and also to measure progress towards achieving that agenda.

The first way to contribute to the design of appropriate indicators is to participate in mainstream processes, such as the Partnership on Measuring ICT for Development, emerging from the WSIS events. These are extremely important venues for voicing alternative perspectives and agendas. The participation of civil society in international forums is increasingly necessary for the processes to be viewed as legitimate.

Another good way to achieve an intrinsic understanding of indicators is to use them. As with most good practices, it is useful to begin at home. Implementing proper evaluation practices for projects and programmes requires the same steps used for indicator design: identifying 1) what needs to be known or made explicit; 2) where that information resides; 3) a strategy for sampling the data or collecting information; 4) parameters for ongoing monitoring; and 5) a presentation method to effectively depict the needed information. Much work has already been undertaken to help users develop and apply evaluation practices that rely on evaluation-type indicators for advocacy activities. Resources such as the Gender Evaluation Methodology (GEM)[16] set out to explain and demystify processes around how to collect data and use it effectively. There are numerous guides on project evaluation, but because of the lack of significant stocks of information from a gendered perspective, it is perhaps useful as a general rule to begin with GEM and only deviate from it if a clear case is made that a different approach is more effective. Through establishing agendas in our own practices, new norms are created for the quality of data stocks and indicators.
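
A minimal template, with purely illustrative entries, shows what answering those five steps might look like for a single evaluation indicator in an advocacy project:

    indicator_plan = {
        "what_to_show":      "Share of women trainees using the telecentre at least weekly",
        "where_it_resides":  "Telecentre sign-in log, disaggregated by sex",
        "collection_method": "Monthly extraction of the log plus a short exit survey",
        "monitoring":        "Quarterly review against a baseline taken at project start",
        "presentation":      "Trend chart in the quarterly report to partners",
    }

    for step, decision in indicator_plan.items():
        print(f"{step}: {decision}")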

To achieve clarity about our own use of data and indicators, agreement on definitions and priorities must occur across the organisation and/or network. Initiatives such as this publication require that priorities for evaluation are agreed upon. Evidence allocated to these categories across different case study countries provides an opportunity to work towards standardisation of findings and resources, and to agree upon acceptable sources of indicators.

Drafting strategic documents – such as the Association for Progressive Communications (APC) Internet Rights Charter, or the APC Recommendations to the WSIS on Internet Governance – requires a vision of how to measure progress. In the latter document, one of five areas of concern is dedicated to ensuring that internet access is “universal and affordable” (APC, 2005 and 2006). We need indicators to illustrate where to exert efforts and pressure, and a way of measuring progress towards these goals. Asserting aspirations of affordable and universal internet access implies that there are definitions of “affordable” and “universal” against which progress can be assessed. Affordability in itself is a highly relative term, as illustrated by Milne’s (2006) Affordability Toolkit: affordability is contingent on willingness and ability to pay for services, access to currency, definitions of poverty and the baskets of goods used to assess disposable income, among other factors. Universal merely means ubiquitous, but as discussed above, ubiquitous access to a signal is a very different concept from meaningful integration of new ICT services and applications into everyday lives. Indeed, as we write our vision statements, we must simultaneously be devising a vision of the evidence that will be marshalled for advocacy and to celebrate successes.
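
One hedged way to operationalise “affordable”, sketched below with an illustrative threshold that is not taken from Milne (2006), is to compare the cost of a basket of communication services against a chosen share of household income:

    def is_affordable(basket_cost, household_income, threshold=0.05):
        """True if the basket costs no more than the threshold share of income."""
        return basket_cost / household_income <= threshold

    print(is_affordable(basket_cost=4, household_income=100))   # True  (4% of income)
    print(is_affordable(basket_cost=12, household_income=100))  # False (12% of income)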

At the end of the day, it may simply be important to know how many people have access to a telephone. This is an important question – and even more so if we take the time to unpack it.

References

APC (Association for Progressive Communications) (2005). APC's Recommendations to the WSIS on Internet Governance, November 2005 [online]. Available from: <www.itu.int>.

APC (2006). APC's Internet Rights Charter [online]. Available from: <rights.apc.org>.

Arndt, C., Oman, C. (2006). Uses and Abuses of Governance Indicators. (n.p.): OECD.

Bender, G. (2006). Peculiarities and Relevance of Non-Research-Intensive Industries in the Knowledge-Based Economy [online]. Available from: <www.pilot-project.org>.

Esselaar, S., Stork, C., Ndiwalana, A., Deen-Swarray, M. (2006). ICT usage and its impact on profitability of SMEs in 13 African Countries: International Conference on Information and Communication Technologies and Development (ICTD2006). 25-26 May 2006, Berkeley.

George, S. (2004). Another world is possible if… . London/New York: Verso.

Gillwald, A. (2005). Towards an African e-Index: Household and Individual ICT Access and Usage across 10 African Countries [online]. Available from: <www.researchictafrica.net>.

Girardi, R., Sajeva, M. (2004). EU New Economy Policy Indicators Quality Management Report 2.0 [online]. Available from: <farmweb.jrc.cec.eu.int>.

Goswami, D. (2006). A Review of the Network Readiness Index. World Dialogue on Regulation for Network Economies (WDR) [online]. Available from: <www.regulateonline.org>.

Gual, J., Trillas, F. (2006). “Telecommunications Policies: Measurement and Determinants”. Review of Network Economics [online], 5, (2), pp 249-272. Available from: <www.rnejournal.com>.

Huyer, S., Hafkin, N., Ertl, H., and H. Dryburgh (2003). “Women in the information society”. In Sciadas, G. (Ed). From the Digital Divide to Digital Opportunities: Measuring Infostates for Development, pp. 135-196. Québec: Claude-Yves Charron. Available from: <www.itu.int>.

ITU (International Telecommunication Union) (2005). Measuring ICT: The Global Status of ICT Indicators [online]. Available from: <www.itu.int>.

ITU (2006a). World Information Society Report 2006 [online]. Geneva: ITU. Available from: <www.ifap.ru>.

ITU (2006b). World Telecommunication/ICT Development Report 2006: Measuring ICT for Social and Economic Development [online]. Geneva: ITU. Available from: <www.itu.int>.

Kauffman, R.J., Kumar, A. (2005). A Critical Assessment of the Capabilities of Five Measures for ICT Development [online]. Available from: <misrc.umn.edu>.

Lee, R. W., Wack, P., Jud, E. (2003). “The Development of Indicators of Sustainability”, Toward Sustainable Transportation Indicators for California [online], pp. 17-29. San José: Mineta Transportation Institute. Available from: <transweb.sjsu.edu>.

Milne, C. (2006). Telecoms demand: measures for improving affordability in developing countries. A toolkit for action. Main Report [online]. (n.p.): World Dialogue on Regulation for Network Economies. Available from:
<www.regulateonline.org>.

Minges, M. (2005). Evaluation of e-Readiness Indices in Latin America and the Caribbean [online]. Santiago: ECLAC. Available from: <www.eclac.org>.

Mottin-Sylla, M.H. (2005). The Gender Digital Divide in Francophone Africa: A Harsh Reality [online]. (n.p.): ENDA. Available from: <www.genderit.org>

NTIA (National Telecommunications and Information Administration) (1999). Falling Through the Net: Defining the digital divide. Washington: NTIA.

OECD (Organisation for Economic Co-Operation and Development) (1998). Human Capital Investment: An International Comparison [online]. (n.p.): OECD. Available from:
<www.mszs.si>.

OECD (2005). “Guide to Measuring the Information Society”. Working Party on Indicators for the Information Society [online]. (n.p.): OECD. Available from: <www.oecd.org>.

Pogge, T., Reddy, S.G. (2006). Unknown: The Extent, Distribution and Trend of Global Income Poverty [online]. Available from: <ssrn.com>.

Sajeva, M. (2005). A methodology for Quality Assurance of Knowledge Economy Statistical Indicators. The communication of risks and uncertainties for a continuous improvement [online]. Available from:
<farmweb.jrc.cec.eu.int>.

Sciadas, G. (2003). From the Digital Divide to Digital Opportunities: Measuring Infostates for Development [online]. Québec: Claude-Yves Charron. Available from: <www.itu.int>.

Technopolis (2003). Benchmarking Telecommunication and Access in the Information Society [online]. Available from:
<www.eurosfaire.prd.fr>.

Torero, M., von Braun, J. (2005). ICTs. Information and Communication Technologies for the Poor [online]. Available from: <www.ifpri.org>.

UN (United Nations) (2005a). Global E-Government Readiness Report 2005 [online]. New York, USA: UN. Available from:
<unpan1.un.org>.

UNCTAD (UN Conference on Trade and Development) (2006). The Digital Divide Report: ICT Diffusion Index 2005. New York/Geneva: United Nations. Available from:
<www.unctad.org>.

UNDP/UNIFEM (United Nations Development Programme/United Nations Development Fund for Women) (2004). Bridging the Gender Digital Divide. A Report on Gender in ICT in Central and Eastern Europe and the Commonwealth of Independent States [online]. Bratislava: UNDP/UNIFEM. Available from: <web.undp.sk>.

WB (World Bank) (2005). Financing Information and Communication Infrastructure Needs in the Developing World: Public and Private Roles [online]. Available from: <event-africa-networking.web.cern.ch>.