The discussion about digital rights has so far centered mainly on civil and political rights. Nevertheless, the emergence of social networks, communication platforms, and connected devices; the appearance of new technologies and digital services directed at the most vulnerable populations; and the existence of continually updated algorithms that can predict choices based on data collected over time introduce new vulnerabilities for data privacy and challenge the defense of human rights (and digital rights) in the 21st century.
However, recent efforts have begun to advance a perspective that analyzes digital rights beyond data privacy and freedom of expression and brings into sharp focus the impact new technologies can have on socio-economic rights, and more specifically the right to health. A recent case in England is a good example of this phenomenon. Early this year, the UK Home Office and the National Health Service released a plan resembling a controversial data-sharing scheme that enabled the British government to start accessing medical records as part of the welfare assessment process. A program that draws on individual health data to help the government determine how much welfare support each person is entitled to can rapidly evolve into a national health data practice that damages the doctor-patient relationship and “deter[s] vulnerable people from seeking medical assistance when they need it,” since it might disclose patients’ immigration status and other sensitive personal information. Examples like this highlight that digital rights and robust protective legal frameworks, beyond Facebook leaks, are an imperative rather than a whim.
Data rights matter especially in the global health context, where diseases and information spread quickly and easily across borders. While some data collection efforts can be well intended, aimed at treating specific patients better (e.g., medical sensors on interactive toys for kids) or at accelerating medical studies (e.g., speeding up information gathering for monitoring and assessing the risk of infectious diseases), regulation is fragile and quality assurance frameworks are limited, leaving people’s personal health data exposed to abuse and misuse. While solutions such as anonymization methods exist to address the need to share personal health information for surveillance purposes in the age of epidemics like Ebola, the practice is not widespread, and most published data collection examples have involved datasets that were not adequately anonymized.
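To make concrete what “adequate” anonymization involves beyond simply dropping names, the sketch below illustrates a common pseudonymization pattern: direct identifiers are replaced with a salted, irreversible token, and quasi-identifiers such as age and postal code are coarsened, so records remain linkable for disease surveillance without pointing back to an individual. All field names and values here are invented for illustration; real de-identification must follow an applicable standard (for example, HIPAA’s Safe Harbor rules), not this minimal example.

```python
import hashlib
import secrets

def pseudonymize_record(record, salt):
    """Replace direct identifiers with a salted hash and coarsen
    quasi-identifiers, so records can be linked for surveillance
    without exposing who the patient is. Illustrative only."""
    out = dict(record)
    # Direct identifier: replace the name with an irreversible token.
    token_source = (record["name"] + salt).encode("utf-8")
    out["patient_token"] = hashlib.sha256(token_source).hexdigest()[:16]
    del out["name"]
    # Quasi-identifiers: generalize age to a 10-year band and keep
    # only a partial postal code, reducing re-identification risk.
    decade = (record["age"] // 10) * 10
    out["age_band"] = f"{decade}-{decade + 9}"
    del out["age"]
    out["postcode"] = record["postcode"][:3] + "**"
    return out

# The salt is kept secret and stored separately from the data, so
# the token cannot be recomputed by anyone holding the dataset alone.
salt = secrets.token_hex(16)
record = {"name": "Jane Doe", "age": 34, "postcode": "SW1A2", "diagnosis": "measles"}
print(pseudonymize_record(record, salt))
```

Even this pattern is not a complete defense: if the remaining quasi-identifiers are distinctive enough, individuals can still be re-identified by linking against other datasets, which is precisely why published examples that skip these steps are so troubling.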
Recent HIV epidemic surveillance innovation provides an example. A recently released HIV prediction model (essentially an algorithm) to identify potential PrEP candidates in an extensive healthcare system, developed by researchers at Harvard and Kaiser Permanente Northern California, opened a discussion about the vast risks this kind of innovation can pose to patients’ information. The model, built on 44 predictors and factors, may well be accurate in predicting the risk of HIV infection and could answer the limitations in implementing pre-exposure prophylaxis (PrEP) among potential patients. Nonetheless, it also poses a significant challenge for legislation, which needs to protect citizens not only from diseases but also from discrimination and exclusion by health settings, insurance companies, and governments. Legal systems around the world now have to consider the ramifications of storing disease-related personal data, even when it is only the result of an academic prediction process, especially in cases where the data’s chain of custody could be at risk. Consider the potential use of this information by less traditional actors: banks assessing credit risk, immigration systems evaluating applicants, and prospective employers running background checks.
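For readers unfamiliar with how such a prediction model works, the toy sketch below shows the usual shape: a weighted sum of predictors passed through a logistic function to yield a probability. The predictor names and weights are invented for illustration and bear no relation to the 44 predictors in the published model; the point is that the output, a per-person risk estimate tied to a sensitive condition, is itself sensitive data the moment it is stored.

```python
import math

def risk_score(features, weights, intercept):
    """Toy logistic risk model: a weighted sum of predictors mapped
    through the logistic function to a probability between 0 and 1."""
    z = intercept + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors and weights, for illustration only.
weights = {"recent_sti_test": 1.2, "prep_discussion": 0.8, "age_under_35": 0.4}
features = {"recent_sti_test": 1, "prep_discussion": 0, "age_under_35": 1}
p = risk_score(features, weights, intercept=-3.0)
print(round(p, 3))
```

Once a number like `p` is attached to a patient record, it can be copied, leaked, or repurposed just like any other data point, which is why the chain of custody concerns above apply to predictions as much as to diagnoses.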
The Digital Freedom Fund has identified three overlapping phases through which societies manage the change and challenges created by new technologies. The first, concerning ethics and conventions, has seen much activity since 2016, as developed countries acknowledge the threats (alongside the benefits) of the rapid emergence of artificial intelligence technology and its social and economic implications. Multiple committees and partnerships have been established between the public and private sectors to define the “good” and the “bad” and to develop ethical guidelines that set some limits. I believe this phase is far from over, as new technologies and algorithms emerge every day, and yesterday’s responses are merely inputs, not permanent solutions. As health technologies keep developing in the form of cellphone applications, assistive technology, and care robots for the elderly, and as new platforms to transport patients, like Uber Health, rapidly spread among the population, conventions and ethics enter a never-ending reshaping phase. We should be careful with this continuous ethical redefinition process, and human rights can be a compass guiding all our efforts.
Standards and regulation constitute the second phase. The creation of new institutions, recommendations, and toolkits started in 2018 and is still under wide development globally. Accountability frameworks and data protection rules in Europe are great examples; however, there is more to be done. As Human Rights Watch has pointed out, human rights in the digital age demand not only in-depth and comprehensive legislation but also a UN Rapporteur whose work represents the international community’s “concern about the human rights costs and consequences of unchecked mass surveillance.” Legislators around the world must give the necessary attention to the collection of personal health information. There must be, at minimum, legal or policy provisions that guarantee people’s right to access transparent information about data collection, processing, and storage in clear and understandable language, as well as the right to object to, or to request the erasure of, their data. In the absence of international legal provisions directed specifically at data protection and health, human rights experts and legislators should also draw on principles from existing international legal standards. For example, the Convention on the Rights of Persons with Disabilities recognizes the importance of access to assistive technology and affirms that “assistive technology is essential to enable persons with disabilities to live independently (art. 19) and to participate fully in all aspects of life (art. 29).” The Committee on the Rights of Persons with Disabilities, in its draft general comment No. 5 on article 19, clarifies that persons with disabilities should retain control over their living arrangements (autonomy) and that involuntary placement and treatment are incompatible with the Convention. An interpretation that includes any imposition of technological care likewise conflicts with the principle of autonomy and would, therefore, run counter to the Convention.
Indeed, principles such as dignity, autonomy, equality, nondiscrimination, participation, and inclusion, all included in this Convention, are basic elements to consider as international (and national) legal systems begin developing personal health data protections.
This second phase also requires establishing health departments and specialists inside national data protection agencies. There is a clear need both to interpret existing and emerging regulation in light of the “right to the highest attainable standard of physical and mental health” and to provide any institution that protects people’s personal health information with resources, rulemaking authority, and sufficient enforcement powers. Alongside this, there must be algorithm governance rules to guarantee unbiased and fair data practices, promote data privacy innovation for health, and limit government access to personal data. As Public Citizen recently affirmed, “the use of secret algorithms based on individual data permeates our lives,” so societies need to set limits on automated decision-making as artificial intelligence is used to determine eligibility for health insurance and other life necessities.
There is still much to be done in the regulatory phase, and until this step reaches its pinnacle, the third phase, composed of awareness campaigns and litigation initiatives, will not emerge in full to show that actual legal changes have happened and are enforceable. Awareness campaigns and litigation against algorithms and artificial intelligence cannot appear until there is a legal framework on which human rights defenders can build strong cases and precedents on different fronts, especially regarding violations of the right to protection of personal health data. Nonetheless, while we wait for the first case, we can build strong collaborations among digital rights scholars, advocates, and activists, and start developing advocacy strategies both for dealing with problems we already face, like the unethical and sloppy ways in which companies treat our data, and for keeping private and public parties accountable.
We should also be especially wary of recent government programs that, like the UK example mentioned above, pursue the “datafication” of vulnerable populations with the goal of better allocating health and care resources. As the United Nations Special Rapporteur on extreme poverty and human rights recently warned, the poor are often the testing ground for governments wanting to introduce new technologies. In sum, we must advocate for controls over big data in healthcare and open data, which currently offer the best scenario for health innovation while also posing new challenges around ownership of personal data and commercial exploitation of health information.