Poland

Algorithms deciding the future of social rights: Some early lessons from Poland

Introduction

Governments all around the world are looking to artificial intelligence (AI) and other automated systems as an attractive solution to many complicated social problems. One of the main promises of these new technological developments is greater efficiency and cost-effectiveness in public administration. Automated systems and algorithms are already being used to determine health insurance status, check eligibility for welfare benefits or detect potential fraud.[1] These technologies are becoming a vital element of procedures that have a significant effect on the enjoyment of social rights. At the same time, they also create many problems for transparency and accountability and can amplify existing inequalities.

This report illustrates some of these problems by analysing automated systems and algorithms already used by the Polish welfare administration. While these systems are not very sophisticated, they can provide some early lessons on data-intensive practices and their impact on people's rights and needs. Because of their technological and mathematical nature, the systems are very often portrayed as objective and apolitical, while in fact they have significant social justice consequences. What we learn from them can be crucial for the discussion about more advanced technologies, including AI, and their human rights implications.

Background: E-government imperatives, complex regulation and austerity

In 2018 the Ministry of Digital Affairs published a document that serves as the basis for a future national AI strategy in Poland.[2] The report focuses on investment in research, cooperation between business and universities, potential AI companies and, most notably, public administration. While the Ministry also raised some ethical and human rights concerns (mostly relating to privacy and non-discrimination), the primary narrative stressed AI's impact on organisational efficiency, innovation and economic growth. In comparison to other countries, the Polish approach is rather typical and sees AI as a strategic technology for state and business operations.[3]

However, the hype around AI is not only driven by government plans and policies. Big international corporations – like Microsoft, IBM or SAS[4] – are already offering sophisticated machine learning and analytical tools that can be used by public agencies, including the welfare administration.[5] Among these innovations are technologies designed to facilitate client services (e.g. verification of eligibility for a service, or services tailored to the individual needs of citizens) and to support planning and policy-related purposes (such as cost-effectiveness analyses).[6] This is, of course, not a new phenomenon. As part of the e‑government agenda, many similar systems have been around for decades.[7] Over the years, Poland has invested heavily in the digitalisation of public services, mostly thanks to funds from the European Union (EU).[8]

These technologies fall under a complicated and fragmented regulatory regime. From the perspective of citizens' rights, the most significant laws are administrative law, data protection law (the EU's General Data Protection Regulation and its national implementation) and numerous laws and regulations related to social benefits, health care and other public services. This mix of rules creates an uncertain and complex framework that governs how public administration makes decisions about services, how it uses digital systems in this process, how citizens may apply for benefits, and what safeguards and oversight mechanisms are in place.

Given these uncertainties and complexities, the discussion around AI shows a growing consensus, at least at the EU level, that automated systems require separate regulation addressing, at a minimum, liability and the opacity of such systems.[9]

Besides these legal problems, there is also a growing international discussion on how such computerised systems can amplify existing inequalities and create concerns for social justice. Researchers have shown that the deployment of such systems in welfare can harm people experiencing poverty and other vulnerable populations.[10] Very often these technologies are justified by austerity policies and cost-reduction strategies, and therefore directly affect the enjoyment of certain social rights.

Early lessons from the datafication and algorithmisation of welfare

Healthcare insurance verification: Errors and limitations to safeguards
In 2018 the Polish press reported on the case of a migrant woman of Romani origin who was denied medical care for her ill daughter.[11] The incident, which quickly escalated into conflict when the police intervened, was caused by anti-Romani sentiment and the discriminatory attitude of medical personnel. However, the direct reason for denying the service was an error in an automated system that checks eligibility for health insurance. The Polish health care system is based on a public insurance scheme; however, a significant segment of the population (around two million people) is excluded from it. This group includes mostly undeclared workers, freelancers, migrants and homeless people.[12]

Introduced in 2013, e-WUŚ (elektroniczny system weryfikacji uprawnień świadczeniobiorców, the electronic system for verifying beneficiaries' entitlements) is queried before each visit to a doctor or hospital.[13] The system uses data shared between the National Health Fund and the Polish Social Insurance Institution to verify insurance status automatically. Due to inaccurate, erroneous or outdated data, e‑WUŚ has in the past made thousands of mistakes, creating chaos in Polish health care.[14] To reduce the scale of this problem, the government introduced some special safeguards and procedures.[15] If the system indicates that a person does not have insurance, she can still visit a doctor, but under certain conditions: she has to sign a statement expressing disagreement with the system's verification result, and within two weeks provide the necessary evidence indicating that she should be covered (usually pay slips).
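The verification flow described above can be pictured as a simple lookup with a paper fallback. The following minimal sketch is purely illustrative: the register snapshot, field names and return messages are assumptions, not the actual e-WUŚ implementation.

```python
# Illustrative sketch of an e-WUŚ-style check-in flow: an automatic lookup
# by national ID number, with a fallback written declaration when the lookup
# comes back negative. All data and field names are invented for illustration.
from typing import Optional

# Hypothetical snapshot of the insurance register keyed by national ID (PESEL).
INSURANCE_REGISTER = {"85010112345": True, "90020254321": False}


def verify_patient(pesel: Optional[str], signed_declaration: bool = False) -> str:
    """Return whether a visit can go ahead under the public insurance scheme."""
    if pesel is not None and INSURANCE_REGISTER.get(pesel, False):
        return "insured: visit covered"
    if signed_declaration:
        # The patient disputes the result and must deliver evidence
        # (e.g. pay slips) within two weeks.
        return "provisionally covered: evidence due within 14 days"
    return "flagged as uninsured: visit refused or billed"


if __name__ == "__main__":
    print(verify_patient("85010112345"))                            # record found
    print(verify_patient(None))                                     # no national ID
    print(verify_patient("90020254321", signed_declaration=True))   # paper fallback
```

Even this toy version shows the structural weak point: a person who cannot be looked up by a national ID number can never be verified automatically and depends entirely on the paper fallback.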

While this procedure works well in most cases, it can create problems for vulnerable populations such as migrants or homeless people. To verify insurance status, e‑WUŚ uses the national ID number. Some migrants do not have one, even if their residence status is fully regularised. Homeless people, who often lack an official address or documentation, face similar problems.[16] These situations create bureaucratic restrictions on the enjoyment of the universal right to health. Some also argue that developing and maintaining the system costs more than simply extending health insurance to the whole population.[17]

Profiling the unemployed: Assumptions versus practice and a successful human rights intervention
Between 2014 and 2019, Polish job centres used an automated decision-making system to categorise unemployed people and automatically allocate them different types of assistance.[18] The so-called profiling mechanism used demographic information and data collected during a computer-based interview conducted by frontline staff. Each data entry was assigned a score from 0 to 8. Based on the final calculation, the algorithm decided on the profile of the individual and, as a result, determined the scope of assistance a person could apply for. The system divided all unemployed people into three profiles, which differed from each other in terms of demographics, distance from the labour market and the chances of re-employment. The main reasons for introducing this technology were to reduce costs, increase efficiency and offer greater individualisation of services.
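The scoring rules themselves were never published, but the publicly reported design (individual answers scored from 0 to 8, summed and mapped onto one of three profiles) can be illustrated with a minimal sketch. Everything below – the questions, weights and cut-off points – is invented for illustration and does not reproduce the actual mechanism.

```python
# Hypothetical sketch of a score-based profiling mechanism. The real questions,
# weights and thresholds were never disclosed; all values here are invented.

ANSWER_SCORES = {
    "age_group": {"18-29": 2, "30-49": 4, "50+": 7},
    "education": {"higher": 1, "secondary": 4, "primary": 7},
    "months_unemployed": {"<6": 1, "6-24": 4, ">24": 8},
    "place_of_residence": {"city": 2, "rural": 6},
}

# Invented cut-off points dividing the total score into three profiles.
PROFILE_THRESHOLDS = [(10, "I"), (20, "II")]  # anything above falls into profile III


def assign_profile(answers: dict) -> tuple:
    """Sum per-question scores and map the total onto one of three profiles."""
    total = sum(ANSWER_SCORES[question][answer] for question, answer in answers.items())
    for threshold, profile in PROFILE_THRESHOLDS:
        if total <= threshold:
            return total, profile
    return total, "III"


if __name__ == "__main__":
    interview = {
        "age_group": "50+",
        "education": "primary",
        "months_unemployed": ">24",
        "place_of_residence": "rural",
    }
    print(assign_profile(interview))  # (28, 'III') with these invented weights
```

Even in this toy version the transparency problem is visible: a person interviewed at a job centre has no way of knowing which answers carry which weights or where the cut-off points lie.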

However, the profiling mechanism caused numerous controversies. For example, citizens applying for assistance did not know how the system worked: what the scope of the input data was (e.g. what kind of demographic data was taken into account), which answers to the interview questions were chosen by the frontline staff, how the information was processed and how scores were calculated. At the same time, the outcomes of the algorithmic calculations could be severe. In many cases, the profile assigned to a person limited her access to specific types of assistance. This was most visible for the so-called “third-profiled”, who could only apply for a very limited set of assistance measures. There were also accusations of discrimination against some groups (single mothers, people with disabilities or rural residents). Many unemployed persons expressed their dissatisfaction with the profiling mechanism and even tried to game the system or submit formal complaints.[19]

In addition, according to an official evaluation of the system, staff at the job centres were also unhappy with the categorising tool: 44% of them said that it was useless in helping them with their everyday work.[20] In practice, they used the system in different ways. For many, the computer was the ultimate decision maker. For others, the profiling was just one part of the broader individual assessment of an unemployed person. Sometimes they even adjusted the profile to meet the expectations of the unemployed person.[21] These examples show that in practice, the use of automated technologies depends on organisational culture, competencies and individual preferences. Designers' intentions may differ significantly from the actual use of the technology. The level of automation results not only from the initial design assumptions but, above all, from the practice of users.

The profiling mechanism was also heavily criticised by civil society and human rights institutions (the Personal Data Protection Office and the Human Rights Commissioner). The Panoptykon Foundation, the country's leading digital rights organisation, ran a long and successful campaign against the system.[22] Raising concerns about transparency, discrimination and privacy, activists convinced the Polish Human Rights Commissioner to refer the profiling case to Poland's Constitutional Court. In 2018, the Court ruled that the provisions introducing the profiling mechanism violated the Polish constitution and, as a consequence, the government decided to stop using the system a year later.[23]

Detecting welfare fraud: The depoliticisation and opacity of the digital system
Another example of a system used by the Polish welfare administration is a complicated mix of different databases with automated functions that help to detect fraud among welfare recipients. At the core of this mechanism is the Central Base of Beneficiaries (CBB), which contains millions of data records on people receiving welfare benefits.[24] During the application process, the system allows an official to check whether a person is receiving the same or a similar benefit in another commune. Social workers may also run queries against other databases to verify an applicant's employment history, taxation and so on. This automated data analysis can indicate that a person is not eligible for certain benefits or be a sign of potential fraud.[25]
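The internal logic of the CBB has not been published, but the core cross-checking function described above – flagging people who appear to receive the same benefit in more than one commune – can be sketched roughly as follows. The record structure and matching rule are assumptions made for illustration only.

```python
# Illustrative sketch of a cross-commune duplicate-benefit check. The actual
# CBB schema and matching rules are not public; the fields and logic below
# are assumptions made purely for illustration.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class BenefitRecord:
    person_id: str      # national ID (PESEL) of the recipient
    benefit_type: str   # e.g. "family allowance"
    commune: str        # commune (gmina) paying the benefit


def find_duplicate_claims(records: list) -> dict:
    """Flag people receiving the same benefit type in more than one commune."""
    communes_by_claim = defaultdict(set)
    for record in records:
        communes_by_claim[(record.person_id, record.benefit_type)].add(record.commune)
    return {claim: sorted(communes)
            for claim, communes in communes_by_claim.items() if len(communes) > 1}


if __name__ == "__main__":
    records = [
        BenefitRecord("85010112345", "family allowance", "Warszawa"),
        BenefitRecord("85010112345", "family allowance", "Kraków"),
        BenefitRecord("90020254321", "housing benefit", "Łódź"),
    ]
    print(find_duplicate_claims(records))
    # {('85010112345', 'family allowance'): ['Kraków', 'Warszawa']}
```

A match produced by such a query is only a signal for a human official to investigate, not proof of fraud in itself.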

The CBB was introduced as part of a larger digitalisation project in the welfare sector called "Empatia" (Empathy in English).[26]

While the introduction of automated systems in welfare is necessary, especially when large social programmes are in place, the initiative created significant problems from the perspective of transparency. The fraud detection mechanism was designed and expanded without any public discussion and sometimes without a clear legal basis. Data-sharing agreements between public agencies were treated as a technical issue, and not as a political problem that can determine individuals' social rights. Additionally, while there was no information about errors or harms caused by the system, many social workers said they were frustrated and explained that the system created problems in their daily operations.[27] Using it is time consuming and the procedures are not always easy to follow. This raises another question: to what extent does the introduction of such a system create greater efficiency in the work of social workers, allowing them to focus on their actual job, which is helping people in need?

Algorithm for allocating resources: Low-quality data and the need for expert advocacy
The Polish welfare administration also uses non-automated models and algorithms for the allocation of crucial resources. One example is the mathematical formula used by the State Fund for Rehabilitation of Disabled People (SFRDP).[28] It governs the distribution of vital financial resources for the rehabilitation of and assistance to people with disabilities. The algorithm regulates the allocation of resources to local governments and is described in detail in law. It uses data such as the number of people with disabilities, the number of children with disabilities and unemployment statistics. While the mechanism is supposed to be objective and technical, it has generated considerable controversy and has been contested.
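The actual formula is laid down in regulation and is considerably more detailed, but its general shape – splitting a budget across local governments in proportion to a weighted needs index built from such statistics – can be illustrated with a simplified sketch. The variables and weights below are invented for this example.

```python
# Simplified, hypothetical sketch of a needs-weighted allocation formula.
# The real SFRDP formula defined in law is more complex; the weights and
# variables below are invented purely for illustration.

def allocate_funds(total_budget: float, regions: list) -> dict:
    """Split a budget across regions in proportion to a weighted needs index."""
    weights = {
        "adults_with_disabilities": 1.0,
        "children_with_disabilities": 2.0,
        "unemployed_with_disabilities": 1.5,
    }

    def needs_index(region: dict) -> float:
        return sum(weights[key] * region[key] for key in weights)

    total_index = sum(needs_index(region) for region in regions)
    return {region["name"]: total_budget * needs_index(region) / total_index
            for region in regions}


if __name__ == "__main__":
    regions = [
        {"name": "Region A", "adults_with_disabilities": 12000,
         "children_with_disabilities": 1500, "unemployed_with_disabilities": 900},
        {"name": "Region B", "adults_with_disabilities": 8000,
         "children_with_disabilities": 700, "unemployed_with_disabilities": 400},
    ]
    print(allocate_funds(100_000_000, regions))
```

The sketch makes the stakes of data quality concrete: every count that feeds the index shifts money between regions, so an undercounted population translates directly into under-funded services.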

One of the biggest problems is the quality and accuracy of the data used in the allocation procedure. The sources of that information are the national census from 2011 and administrative databases. According to organisations fighting for the rights of people with disabilities and representatives of local governments, this data misrepresents the population with disabilities due to methodological problems.[29] Because of this misrepresentation, some local governments received inadequate funds and, as a consequence, many people with disabilities were left without necessary assistance. This problem primarily affected education and support for children with disabilities. In 2018, civil society organisations successfully advocated for some changes to the algorithm, resulting in more significant resources being allocated for specific types of support.[30] However, some of the most pressing problems (e.g. the methodology of the census) remain unresolved. While this case is not an example of automation, it shows that low-quality data and the use of models can cause problems for crucial decision making about social rights. It also demonstrates that for civil society organisations to advocate successfully for their interests, they must engage with the technical language of algorithms and mathematical formulas.

Conclusion

Existing examples of automated systems and mathematical models used in welfare can provide valuable lessons for implementing and problematising more advanced technologies like AI. One of them relates to the political nature of digital technologies and algorithms. While very often portrayed as objective and technocratic – and, as a result, the sole realm of technical experts – these systems (their design, architecture and targets) are in fact deeply political: they create social categories and play a crucial role in decision-making processes that affect thousands of individuals in the allocation of resources. The use and design of automated systems results from choices about policies, priorities and cultural norms. Therefore their deployment and implementation should be subject to democratic control. However, as some of the Polish examples show, this is not always an easy task.

Understanding the impact of these systems is difficult due to the complexity of the technological layers, which demands specific expertise. In such situations, activists and human rights institutions need to learn new skills and engage with a different language, concepts and communities. For example, in the case of the algorithm for allocating resources, activists had to propose specific changes to the law using the complicated mathematical formulas contained in it. There is therefore a need to look for new ways of articulating social justice vis-à-vis automated systems and algorithms. Privacy and data protection remain central frames in this context. However, they have limitations: they focus on quite narrowly understood informational harms and very often ignore the collective injustices created by computer systems. Applying a social rights lens can extend the discussion around automated systems and create necessary connections between social status, discrimination, inequity and the use of technology and its outcomes. A social rights framework involves procedural elements (participation in creating policies or transparency in the individual decision-making process) and substantive considerations (access to a specific set of social services). This framework makes it easier to position AI as a political and social justice issue.

There is also space for more radical political advocacy that would not only push for changes or improvements to algorithms, but also call for the abolition of specific systems that cause harm. The campaign against the mechanism for profiling the unemployed was a good example of how human rights, social justice and the rule of law can help determine which processes can be automated, which should not be, and under what conditions.[31]

It is also important to acknowledge that many of these technologies function in very complex organisational and institutional environments. Their use depends on different organisational cultures, individual motivations, conflicts between institutions and more. As indicated in the profiling case, frontline staff can, for example, use systems in ways other than those intended. Understanding this environment can be very helpful in any campaign related to technologies used in the welfare administration, or in any government service-oriented institution.

Action points

The following advocacy steps are suggested for civil society in Poland:

  • It is crucial that AI and other new technological innovations are examined from the perspective of social justice and inequality, and that they address the needs and struggles of marginalised communities. The debate around these systems should focus on their consequences rather than their efficiencies, and on people rather than technical details about automation.
  • Learn from what is already being implemented. There is a range of existing technologies and analytical models used by the welfare administration. Organisations that try to engage in the debate about the use of AI can learn from the successes and pitfalls of these models.
  • There is an emerging need to connect different advocacy strategies. In the case of welfare technologies, digital rights activists should join forces with anti-poverty and anti-discrimination organisations and groups that have a greater connection with affected communities.
  • Human rights advocacy should engage a plurality of claims that combine, for example, privacy, anti-discrimination and social rights. Activists should conceptualise and advocate for new forms of political and democratic control and supervision over technologies used in sensitive areas like welfare.

Footnotes

[1]  See, for example: Spielkamp, M. (Ed.) (2018). Automating Society: Taking Stock of Automated Decision Making in the EU. www.algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf

[2] Ministerstwo Cyfryzacji. (2018). Założenia do strategii AI w Polsce. Warszawa. https://www.gov.pl/documents/31305/436699/Za%C5%82o%C5%BCenia_do_strategii_AI_w_Polsce_-_raport.pdf

[3] Dutton, T. (2018, 28 June). An Overview of National AI Strategies. Medium. https://www.medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd

[4] SAS considers itself a leader in analytics: https://www.sas.com

[5] For example, in 2013, IBM was lobbying the city of Lodz (in central Poland) to optimise the governance of social assistance programmes by using data analytics. In: www.archiwum.uml.lodz.pl/get.php?id=12361

[6] See, for example: IBM. (n.d.). Government Health & Human Services Solutions. www.ibm.com/downloads/cas/BAWQZPL2; SAS. (n.d.). SAS provides the DWP with powerful predictive insights in highly complex policy areas. https://www.sas.com/en_gb/customers/dwp.html

[7] In the 1980s, many countries developed so-called expert systems that were an early version of AI. See: Weintraub, J. (1989). Expert Systems in Government Administration. AI Magazine, 10(1). https://pdfs.semanticscholar.org/4c36/058a0d5bde0c1c13db139e9d30246ee5c0dc.pdf

[8] European Commission. (2016). E-Government in Poland. https://joinup.ec.europa.eu/sites/default/files/inline-files/eGovernment_Poland_June_2016_v4_01.pdf

[9] Stolton, S. (2019, 27 June). ‘Adverse impacts’ of Artificial Intelligence could pave way for regulation, EU report says. Euractiv.com. https://www.euractiv.com/section/digital/news/adverse-impacts-of-artificial-intelligence-could-pave-way-for-regulation-eu-report-says

[10] Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. New York: St. Martin's Press; Gilliom, J. (2001). Overseers of the Poor: Surveillance, Resistance, and the Limits of Privacy. Chicago: University of Chicago Press. 

[11] Lehmann, A. (2018, 2 March). Lekarka wyrzuciła za drzwi romską dziewczynkę i jej matkę, bo nie miały dokumentów. Gazeta Wyborcza. www.poznan.wyborcza.pl/poznan/7,36001,23092992,lekarka-wyrzucila-za-drzwi-romska-dziewczynke-i-jej-matke-bo.html

[12] Komuda, L. (2016, 5 December). Fakturka za uratowanie życia. 2,5 miliona Polek i Polaków nie ma ubezpieczenia NFZ. Oko press. www.oko.press/fakturka-uratowanie-zycia-25-miliona-polek-polakow-ubezpieczenia-nfz

[13] Narodowy Fundusz Zdrowia. (n.d.). eWUŚ - Elektroniczna Weryfikacja Uprawnień Świadczeniobiorców. www.nfz-warszawa.pl/dla-pacjenta/ewus-elektroniczna-weryfikacja-uprawnien-swiadczeniobiorcow

[14] Cichocka, E. (2013, 23 September). System eWUŚ pozbawia Polaków bezpłatnego leczenia. Gazeta Wyborcza. www.wyborcza.pl/1,76842,14651136,System_eWUS_pozbawia_Polakow_bezplatnego_leczenia.html

[15] Narodowy Fundusz Zdrowia. (n.d.). Co zrobić, gdy eWUŚ wyświetli nas "na czerwono". www.nfz.gov.pl/dla-pacjenta/zalatw-sprawe-krok-po-kroku/co-zrobic-gdy-ewus-wyswietli-nas-na-czerwono

[16] Gangadharan, S., & Niklas, J. (2018). Between Antidiscrimination and Data: Understanding human rights discourse on automated discrimination in Europe. London: London School of Economics. https://eprints.lse.ac.uk/88053/13/Gangadharan_Between-antidiscrimination_Published.pdf

[17] Nyczaj, K. (2016, 2 March). Co dalej z eWUŚ? Medexpress.pl. https://www.medexpress.pl/co-dalej-z-ewus/63134

[18] Niklas, J. (2019, 16 April). Poland: Government to scrap controversial unemployment scoring system. Algorithm Watch. www.algorithmwatch.org/en/story/poland-government-to-scrap-controversial-unemployment-scoring-system

[19] Niklas, J., Sztandar-Sztanderska, K., & Szymielewicz, K. (2015). Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making. Warszawa: Fundacja Panoptykon. www.panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf

[20] Ministerstwo Rodziny, Pracy i Polityki Społecznej. (2019). Analiza Rozwiązań Wprowadzonych Ustawą z Dnia 14 Marca 2014 r. o Zmianie Ustawy o Promocji Zatrudnienia i Instytucjach Rynku Pracy oraz Niektórych Innych Ustaw.

[21] Sejm. (2019). Projekt zmiany ustawy o promocji zatrudnienia i instytucjach rynku pracy. https://orka.sejm.gov.pl/Druki8ka.nsf/0/07BB9C4DDB659D71C12583D10069F1B1/%24File/3363.pdf

[22] Niklas, J., Sztandar-Sztanderska, K., & Szymielewicz, K. (2015). Op. cit.

[23] Trybunal Konstytucyjny. (2018). Zarządzanie pomocą kierowaną do osób bezrobotnych. www.trybunal.gov.pl/postepowanie-i-orzeczenia/komunikaty-prasowe/komunikaty-po/art/10168-zarzadzanie-pomoca-kierowana-do-osob-bezrobotnych

[24] Najwyższa Izba Kontroli. (2016). Realizacja i Wdrażanie Projektu Emp@tia. https://www.nik.gov.pl/plik/id,11506,vp,13856.pdf; Niklas, J. (2014, 18 April). Ubodzy w prywatność. Fundacja Panoptykon. www.panoptykon.org/wiadomosc/ubodzy-w-prywatnosc

[25] Ministerstwo Rodziny, Pracy i Polityki Społecznej. (2018). Pojedyncze Usługi Wymiany Informacji udostępnione lub planowane w ramach Centralnego Systemu Informatycznego Zabezpieczenia Społecznego (CSIZS). https://empatia.mpips.gov.pl/documents/10180/1185925/2019-02-01+us%C5%82ugi+pojedyncze+CSIZS.pptx/73dd7dee-fff7-41a8-8145-44eb9f942a55;jsessionid=ba791fc14c89eda480e921ffeb46?version=1.0

[26] https://empatia.mpips.gov.pl

[27] Web forum of social workers. www.public.sygnity.pl/forums/viewforum.php?f=73&sid=058ae5fe15fc8a89c34c3f93a58b0c5d

[28] Malinowska-Misiąg, E., et al. (2016). Algorytmy Podziału Środków Publicznych. www.ibaf.edu.pl/plik.php?id=598

[29] Związek Powiatów Polskich. (2016). Stanowisko w sprawie koniecznych zmian w zakresie dysponowania przez samorządy środkami PFRON. www.zpp.pl/storage/library/2017-06/adedc562e4734b57e47ed71d7235693c.pdf

[30] Sejm. (2018). Interpelacja nr 22410 w sprawie apelu skierowanego przez pracowników warsztatów terapii zajęciowej z województwa małopolskiego. www.sejm.gov.pl/sejm8.nsf/InterpelacjaTresc.xsp?key=5304F8FD

[31] Fundacja Panoptykon. (2019, 14 April). Nieudany eksperyment z profilowaniem bezrobotnych właśnie przechodzi do historii. Fundacja Panoptykon. https://panoptykon.org/wiadomosc/nieudany-eksperyment-z-profilowaniem-bezrobotnych-wlasnie-przechodzi-do-historii 

Notes:
This report was originally published as part of a larger compilation: “Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development”.
Creative Commons Attribution 4.0 International (CC BY 4.0) - Some rights reserved.
ISBN 978-92-95113-12-1
APC Serial: APC-201910-CIPP-R-EN-P-301
ISBN 978-92-95113-13-8
APC Serial: APC-201910-CIPP-R-EN-DIGITAL-302