Friday, 31 January, 7.30 PM at the Star and Shadow Cinema, Newcastle
In a world stressed by climate change and population growth, the issue of GM crops is again rising to the top of the socio-political agenda. Agri-business, as supported by some scientists, promotes GM as the only way forward.
The Star and Shadow Cinema in Newcastle, in collaboration with Durham University's Institute of Hazard, Risk and Resilience, is holding a film-led debate with experts from multiple disciplines to help the community appreciate what is at stake. The discussion will be led by the Executive Director of IHRR, Professor Sarah Curtis, together with Professor John Gatehouse, Dr Susana Carro-Ripalda and Dr Philip Garnett.
Dr Philip Garnett, a researcher on the Tipping Points project, reports on the 30th Chaos Communication Congress he attended in Hamburg, Germany, in December, which addressed topical issues such as mass surveillance, the future of the Internet and the Snowden affair.
The 30th Chaos Communication Congress (30C3) was always going to have at its heart the events surrounding the leaks of NSA files by Edward Snowden. In some ways I was banking on it. Why else would I drag myself away from Christmas leftovers and warm fires to get on a plane to Hamburg in the cold and rain? I'm glad I did. Amongst the talks on automatic fermentation of beer, and the vacuum tube races around the conference building, was a rich vein of presentations and discussions around (to paraphrase the opening speaker) the nightmare reality to which the community has woken from a bad dream.
Despite this, there was still a sense of fun at the congress. There were many stalls where you could enjoy learning to pick a lock, program a board of LEDs with the message of your choice, or check out the latest DIY 3D printers. However, there was also a punishing timetable of talks reacting to, discussing, and trying to grasp the new post-Snowden future. One thing they all had in common was a sense of disbelief, even speechlessness. It wasn't exactly the confirmation that we are being monitored (we all knew that); it was perhaps the scale of it, and a sense that nothing can be done to reverse it.
As concerns about global food security rise, many questions remain as to how the world will meet growing demand for a sustainable food supply. While poverty and food distribution underlie many of the challenges of food security, biotechnology in the form of genetically modified seeds is likely to play an increasing role in how food is grown and traded in both developed and less developed countries.
Does patenting seeds create new risks to food security, or does it provide a way of securing the world food supply through centralisation? Are we simply looking at a new way of meeting the demands placed upon agriculture, or a new way for chemical corporations such as Monsanto, DuPont, Dow Chemical and others to place new demands on society? Most importantly, where does this leave farmers and the communities they support?
GM food is perhaps one of the most controversial topics in the history of science and technology. Genetically modified foods have been restricted by some countries in Europe, Asia and Latin America, but accepted by others within the same regions, such as Brazil, China, Spain and India, and are widespread in the US and Canada. What is often left out of the GM debate is an articulate understanding of the cultural and social contexts that made GM technology so controversial in the first place, especially regarding whether it should be used to feed the world.
In many parts of the world the role GM will play in agriculture in the future will depend largely on how it is perceived culturally. The science, the nuts and bolts of GM, is obviously important for understanding its possibilities and risks, but it too is grounded within its own political and social contexts. Whether it is genetically modified seeds patented by multinational corporations or the attempt to engineer drought-resistant crops, GM technology and human values are intertwined. GMFuturos, a new multidisciplinary research project, will explore some of these complex multiple framings of GM and contribute to scientific and policy debates surrounding GM technology.
Governing scientific and technological innovation is a tricky business, primarily because of uncertainty: the risks society must face if it chooses to intervene using methods that could have damaging consequences, fail entirely, or both. It's a cliché of course, but we really do 'live in exciting times', as humanity has an array of advanced technologies at its disposal. Climate change, however, is in a sense the antithesis of technological development, or at least of how it has proceeded thus far, mostly because the world is locked into fossil fuels as its primary source of energy. Yet the controversial applications of geoengineering may prove a last resort for reducing the temperature of the planet and preventing the devastating environmental impacts of climate change.
The long-term ecological impacts of the BP oil spill disaster may recently have come to light, after scientists and fishermen discovered fish and crustaceans with skin lesions and other abnormalities. The US Food and Drug Administration insists that seafood from the Gulf of Mexico is safe to eat regardless of the abnormalities found in species so far. BP also maintains that seafood from the Gulf is as safe to eat as it was before the spill. The US government, led by Barack Obama, is continuing to move forward with offshore drilling plans for oil and natural gas, including in the Arctic Ocean, calling into question whether lessons have truly been learnt from the largest marine oil spill in history.
Land mines and unexploded ordnance are a serious problem in many parts of the world. They are a painful reminder that some of the deadliest and most dangerous hazards are made by people. Nor is the use of mines in warfare over: the governments of Israel, Libya and Myanmar have all been confirmed to be laying anti-personnel mines, which are designed to kill people. A number of countries (including Russia, China and the US) have not signed the Ottawa Treaty (the Anti-Personnel Mine Ban Convention), adopted in Ottawa, Canada in 1997 to ban the use of landmines.
While the number of casualties caused by mines and unexploded ordnance has fallen since the 1990s, when common estimates were 26,000 per year, the Landmine and Cluster Munition Monitor still recorded more than 5,000 casualties in 2008, 3,956 in 2009 and 4,010 in 2010. In 2010, 200 km² of mined areas were cleared by 45 action programmes, and more than 388,000 anti-personnel mines and over 27,000 anti-vehicle mines were destroyed during this clearance. Programmes in Afghanistan, Cambodia, Croatia, Iraq and Sri Lanka accounted for more than 80 percent of recorded clearance. Also, 80 percent of the world's nations have signed the Mine Ban Treaty. Unfortunately, the US was reported to have slowed its policy review on the treaty last year, and even among the countries that have signed, the rate of compliance for submitting annual transparency reports was at an all-time low of 52 percent.
The Ushahidi platform has been used by people all over the world to report information on large-scale hazard events in real time. During the intense flooding in Pakistan in 2010, Ushahidi enabled people to report via SMS where help was needed. It has also been used to track national elections, creating visualisations of data on the web or sent via mobile devices. Ushahidi also created SwiftRiver, which can be used to analyse information about emerging disasters communicated online and via text message.
This is a valuable tool for aid workers as well as journalists who are pressed for time when reporting on an ongoing disaster or crisis. The platform can also 'crowdsource' information to assess the trustworthiness of messages and other incoming sources of information. Ushahidi began in 2008 by tracking the spread of post-election violence in Kenya and was later used in the aftermath of the 2010 Haitian earthquake. While it can be used virtually anywhere, the software is clearly very useful for people in developing countries who have access to the internet and/or use mobile devices.
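One intuition behind crowdsourced trust assessment is corroboration: a report is more believable when independent reports nearby say similar things. The toy sketch below illustrates that idea only; it is not Ushahidi's or SwiftRiver's actual algorithm, and the `Report` structure, field names and 3-corroboration threshold are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A minimal crowd-sourced incident report (illustrative, not Ushahidi's schema)."""
    lat: float
    lon: float
    text: str

def trust_score(report, all_reports, radius=0.05):
    """Score a report by counting other reports within ~radius degrees of it.

    The score saturates at 1.0 after three corroborating reports; both the
    radius and the threshold are arbitrary choices for this sketch.
    """
    nearby = [r for r in all_reports
              if r is not report
              and abs(r.lat - report.lat) <= radius
              and abs(r.lon - report.lon) <= radius]
    return min(1.0, len(nearby) / 3)

reports = [
    Report(30.50, 70.10, "flood water rising near bridge"),
    Report(30.52, 70.12, "bridge road impassable"),
    Report(30.49, 70.08, "families stranded on rooftops"),
    Report(35.00, 75.00, "isolated report, no corroboration"),
]

for r in reports:
    print(f"{trust_score(r, reports):.2f}  {r.text}")
```

A real system would have to handle much more than this (duplicate senders, geocoding error, deliberate flooding of false reports), but the corroboration count captures the basic shape of the idea.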
Ushahidi is also behind CrowdMap, which aggregates information about a particular disaster or other event, allowing it to be visualised on a map and timeline. CrowdMap requires no software installation and is straightforward and easy to use. What makes CrowdMap and the other Ushahidi platforms particularly powerful is that users can create their own 'deployment', allowing them to report information about a disaster both locally and internationally. Here are a few examples of how CrowdMap has been used to report on disasters.
ProPublica, a US news agency specialising in investigative reporting, released an informative series of reports (here, here and here) on the use of body-scanning technology by the Transportation Security Administration (TSA), which is responsible for implementing and regulating travel security measures under the Homeland Security Act of 2002. After the agency's 10th anniversary, many people are wondering what the TSA has actually accomplished in making US airports safe from terrorism. The articles focus on body scanners, which sit at the centre of the controversies surrounding the TSA. Even before the body scanners were first introduced, there was concern about whether they could pose a significant health risk, as the x-ray scanners use ionising radiation that could cause cancer in a small number of the airline passengers who pass through them. Because millions of people enter the scanners, even a tiny individual risk means an unfortunate few could develop cancer as a result.
“Even though it’s a very small risk, when you expose that number of people, there’s a potential for some of them to get cancer,” said Kathleen Kaufman, the former radiation management director in Los Angeles County… ProPublica
This health risk is considered low based on the amount of ionising radiation people receive from the scanners, which is much less than what they receive while airborne. People receive far larger doses of ionising radiation from cosmic rays when travelling by air at high altitudes; in fact, pilots and flight attendants are classified as 'radiation workers'. According to a NASA study, flight routes at high latitudes can further increase passengers' radiation exposure during solar storms. The x-ray body scanner is thus just one of a number of radiation risks that air passengers must endure.
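The scale of that difference can be made concrete with a back-of-envelope calculation. The figures below are illustrative order-of-magnitude values (roughly 0.1 µSv per backscatter scan and a few µSv per hour of cosmic-ray dose at cruise altitude), not official dose measurements, so treat the result as a rough comparison only.

```python
# Back-of-envelope comparison of ionising radiation doses.
# All figures are approximate, order-of-magnitude values for illustration.

SCANNER_DOSE_USV = 0.1        # ~dose from one backscatter x-ray scan (microsieverts)
FLIGHT_DOSE_USV_PER_HR = 3.0  # ~cosmic-ray dose at cruise altitude (microsieverts/hour)

flight_hours = 6.0            # e.g. a transatlantic flight
flight_dose = FLIGHT_DOSE_USV_PER_HR * flight_hours

print(f"One scan:   {SCANNER_DOSE_USV:.2f} uSv")
print(f"6 h flight: {flight_dose:.1f} uSv")
print(f"The flight delivers roughly {flight_dose / SCANNER_DOSE_USV:.0f}x the scan dose")
```

On these assumed figures, the flight itself delivers a dose a couple of orders of magnitude larger than the scan taken before boarding, which is the comparison the paragraph above is making.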
After the 10th anniversary of the 9/11 terrorist attacks on the World Trade Center (WTC) and the Pentagon, there is still much work to be done on how security, terror and risk are understood and prepared for in society. As with financial crises, it is not so much a matter of if another attack will happen, but when. The goal of increased security intelligence is to mitigate the risk of any terrorist attack, but a recent review of national security in the US, from the federal and state levels down to the security technologies used by airports, shows that vulnerabilities do exist and need to be addressed as soon as possible. The 9/11 attacks not only changed how the United States viewed the risk of terror, but resonated with countries throughout the world that have experienced terrorist attacks since that tragic event, including the 7/7 bombings in London, the bombing of commuter trains in Madrid, the Beslan school hostage crisis in Russia that killed around 330 people (mostly children), and a whole list of others.
The 9/11 Commission's report, while literary in tone and revealing a number of important details about the attacks, still provides only a limited view of what actually took place before and after the attacks. A number of testimonies were left out of the report, including one given by a former FBI translator, Behrooz Sarshar, who said he had knowledge of a 'kamikaze pilot' plan to attack the US. He was formally interviewed by the 9/11 Commission, which had come under pressure from the 9/11 Family Steering Committee to take Sarshar's testimony; although a memorandum of this meeting is available online, it is heavily edited, with much of its content omitted. Sarshar said he had written to FBI Director Robert Mueller twice about what he knew, but not until November 2002 and again in January 2003, long after the attacks. When asked why he waited so long to bring forward this information, 'he said he didn't want to do any damage to the FBI'. His and other potentially useful testimonies never made it into the Commission's final report.