Wednesday, 30 March 2011
It's all in the name, iRobot!
iRobot Corp. designs robots that perform dull, dirty or dangerous missions in a better way. The company’s proprietary technology, iRobot AWARE Robot Intelligence Systems, incorporates advanced concepts in navigation, mobility, manipulation and artificial intelligence. This proprietary system enables iRobot to build behavior-based robots, including its family of consumer and military robots.
Tuesday, 29 March 2011
Could AI be used to launch and guide rockets?
Could artificial intelligence help rockets launch themselves? With greater automation, rockets would be capable of self-checking for problems, self-diagnosing and, hopefully, fixing minor pre- or post-launch issues.
"So far, rockets are merely automatic. They are not artificially intelligent," said Yasuhiro Morita, a professor at Institute of Space and Astronautical Science at JAXA, Japan's aerospace organisation.
However, according to Morita, the Epsilon launch vehicle - tentatively scheduled for a 2013 launch - is slated to include a whole new level of automation.
Modern rockets already have some elements of automation: sensors can alert engineers to malfunctions, for example, but they can do little to tell them what type of problem has occurred or what type of solution is needed.
But in the future?
"The AI will diagnose the condition of the rocket, but it is more than that," Morita said. Should there be an issue, "the AI system will determine the cause of a malfunction," and potentially fix the problem itself.
Monday, 28 March 2011
CBA develops system to combat money laundering and terrorism financing
Yerevan hosted an event dedicated to the introduction of an Automated Management System at the Center of Financial Monitoring of the Central Bank of Armenia (CBA).
“CBA attaches importance to comprehensive management, safe maintenance and efficient analysis of information related to money laundering and terrorism financing,” CBA Chairman Arthur Javadyan said during the event.
He thanked the U.S. authorities for their technical assistance in introducing the system and expressed hope that cooperation between the CBA and the U.S. embassy will continue through other programs to further develop the system for combating money laundering and terrorism financing.
U.S. Ambassador to Armenia Marie Yovanovitch said for her part that the Automated Management System is an important project for tackling money laundering and terrorism financing. The most effective way to fight money laundering is to deprive criminals of the opportunity to manage their incomes, she said.
Representatives of the Office of RA Prosecutor General, MFA, Police, National Security Service, Union of Armenian Banks and AI Partnership organization also participated in the event.
Saturday, 26 March 2011
Artificial Intelligence: Job Killer
Michael Feldman
A recent article in the New York Times points out that sophisticated data analytics software is doing the kinds of jobs once reserved for highly paid specialists. Specifically, it talks about data mining-type software applied to document discovery for lawsuits. In this realm, these applications are taking the place of expensive teams of lawyers and paralegals.
Basically it works by performing deep analysis of text to find documents pertinent to the case at hand. It's not just a dumb keyword search; the software is smart enough to find relevant text even in the absence of specific terms. One application was able to analyze 1.5 million documents for less than $100,000 -- a fraction of the cost of a legal team, and performed in a fraction of the time.
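The article does not say which algorithms these e-discovery tools use, so the sketch below is only a minimal stand-in: it ranks documents against a query by overall textual similarity (TF-IDF plus cosine similarity via scikit-learn) rather than exact Boolean keyword hits. Note that even this purely lexical approach cannot connect "terminate" with "termination"; closing that gap is exactly what the semantic tools described here claim to do.

```python
# Minimal relevance-ranking sketch: score documents against a query by
# cosine similarity over TF-IDF vectors instead of exact keyword matching.
# Illustrative only; real e-discovery systems add synonym handling,
# stemming, clustering, de-duplication and much more.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The parties agree to terminate the supply contract effective March 1.",
    "Lunch menu for the cafeteria next week.",
    "Counsel advised that early termination may trigger penalty clauses.",
]
query = ["terminate the agreement early"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)   # learn vocabulary from the corpus
query_vector = vectorizer.transform(query)          # project the query into that space

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```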
Mike Lynch, founder of Autonomy (a UK-based e-discovery company), thinks this will lead to a shrinking legal workforce in the years ahead. From the article:
He estimated that the shift from manual document discovery to e-discovery would lead to a manpower reduction in which one lawyer would suffice for work that once required 500 and that the newest generation of software, which can detect duplicates and find clusters of important documents on a particular topic, could cut the head count by another 50 percent.
Such software can also be used to connect chains of events mined from a variety of sources: e-mail, instant messages, telephone calls, and so on. Used in this manner, it can sift out digital anomalies to track various types of criminal behavior. Criminals, of course, are one workforce we'd like to reduce. But what about the detectives that used to perform this kind of work?
The broader point the NYT article illuminates is that software like this actually targets mid-level white collar jobs, rather than the low-end labor jobs we usually think of as threatened by computer automation. According to David Autor, an economics professor at MIT, this is leading to a "hollowing out" of the US economy. While he doesn't think technology like this is driving unemployment per se, he believes the job mix will inevitably change, and not necessarily for the better.
It's the post-Watson era. Get used to it.
Saturday, 19 March 2011
The past, present and future of cancer
Leading cancer researchers reflected on past achievements and prospects for the future of cancer treatment during a special MIT symposium on Wednesday titled “Conquering Cancer through the Convergence of Science and Engineering.”
The event, one of six academic symposia taking place as part of MIT’s 150th anniversary, focused on the Institute’s role in studying the disease in the 36 years since the founding of MIT’s Center for Cancer Research.
During that time, MIT scientists have made critical discoveries that resulted in new cancer drugs such as Gleevec and Herceptin. The center has since become the David H. Koch Institute for Integrative Cancer Research, which now includes a mix of biologists, who are trying to unravel what goes wrong inside cancer cells, and engineers, who are working on turning basic science discoveries into real-world treatments and diagnostics for cancer patients.
That “convergence” of life sciences and engineering is key to making progress in the fight against cancer, said Institute Professor Phillip Sharp, a member of the Koch Institute. “We need that convergence because we are facing a major demographic challenge in cancer as well as a number of other chronic diseases” that typically affect older people, such as Alzheimer’s, Sharp said.
In opening the symposium, MIT President Susan Hockfield said that MIT has “the right team, in the right place, at the right moment in history” to help defeat cancer.
“It’s in the DNA of MIT to solve problems,” said Tyler Jacks, director of the Koch Institute. “I’m very optimistic and very encouraged about what this generation of cancer researchers at MIT will do to overcome this most challenging problem.”
Past and present
In the past few decades, a great deal of progress has been made in understanding cancer, said Nancy Hopkins, the Amgen, Inc. Professor of Biology and Koch Institute member, who spoke as part of the first panel discussion, on major milestones in cancer research.
In the early 1970s, before President Richard Nixon declared the “War on Cancer,” “we really knew nothing about human cells and what controls their division,” Hopkins recalled. Critical discoveries by molecular biologists, including MIT’s Robert Weinberg, revealed that cancer is usually caused by genetic mutations within cells.
The discovery of those potentially cancerous genes, including HER2 (often mutated in breast cancer), has led to the development of new drugs that cause fewer side effects in healthy cells. While that is a major success story, many other significant discoveries have failed to make an impact on patient treatment, Hopkins said.
“The discoveries we have made are not being exploited as effectively as they could be,” Hopkins said. “That’s where we need the engineers. They’re problem-solvers.”
Institute Professor Robert Langer described his experiences as one of the rare engineers to pursue a career in biomedical research during the 1970s. After he finished his doctoral degree in chemical engineering in 1974, “I got four job offers from Exxon alone,” plus offers from several other oil companies. But Langer had decided he wanted to do something that would more directly help people, and ended up getting a postdoctoral position in the lab of Judah Folkman, the scientist who pioneered the idea of killing tumors by cutting off their blood supplies.
In Folkman’s lab, Langer started working on drug-delivering particles made from polymers, which are now widely used to deliver drugs in a controlled fashion.
Langer and other engineers in the Koch Institute are now working on ways to create even better drug-delivery particles. Sangeeta Bhatia, the Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, described an ongoing project in her lab to create iron oxide nanoparticles that can be tagged with small protein fragments that bind specifically to tumor cells. Such particles could help overcome one major drawback to most chemotherapy: Only about 1 percent of the drug administered reaches the tumor.
“If we could simply take these poisonous drugs more directly to the tumors, it would increase their effectiveness and decrease side effects,” Bhatia said.
Other Koch engineers are working on new imaging agents, tiny implantable sensors, cancer vaccines and computational modeling of cancer cells, among other projects.
Personalized medicine
Many of the targeted drugs now in use came about through serendipitous discoveries, said Daniel Haber, director of the Massachusetts General Hospital Cancer Center, during a panel on personalized cancer care. Now, he said, a more systematic approach is needed. He described a new effort underway at MGH to test potential drugs on 1,000 different tumor cell lines, to find out which tumor types respond best to each drug.
At MIT, Koch Institute members Michael Hemann and Michael Yaffe have shown that patient response to cancer drugs that damage DNA can be predicted by testing for the status of two genes — p53, a tumor suppressor, and ATM, a gene that helps regulate p53.
Their research suggests that such drugs should be used only in patients whose tumors have mutations in both genes or neither gene — a finding that underscores the importance of understanding the genetic makeup of patients’ tumors before beginning treatment. It also suggests that current drugs could be made much more effective by combining them in the right ways.
“The therapies of the future may not be new therapies,” Hemann said. “They may be existing therapies used significantly better.”
The sequencing of the human genome should also help achieve the goal of personalized cancer treatment, said Eric Lander, director of the Broad Institute and co-chair of the President’s Council of Advisors on Science and Technology, who spoke during a panel on biology, technology and medical applications. Already, the sequencing of the human genome has allowed researchers to discover far more cancer-causing genes. In 2000, before the sequence was completed, scientists knew of about 80 genes that could cause solid tumors, but by 2010, 240 were known.
Building on the human genome project, the National Cancer Institute has launched the Cancer Genome Atlas Project, which is sequencing the genomes of thousands of human tumors, comparing them to each other and to non-cancerous genomes. “By looking at many tumors at one time, you can begin to pick out common patterns,” Lander said.
He envisions that once cancer scientists have a more complete understanding of which genes can cause cancer, and the functions of those genes, patient treatment will become much more effective. “Doctors of the future will be able to pick out drugs based on that information,” he said.
Friday, 18 March 2011
Is government ready for the semantic Web?
So far it's been slow going, but an interagency XML project could boost law enforcement, health care efforts
By John Moore
Mar 18, 2011
In addition to helping Watson work out the answers to questions such as which fruit tree gives Sakura cheese its flavor, semantic technology can provide answers to questions that interest government agencies and other groups that have historically had trouble identifying patterns or probable sequences in oceans of data.
The idea is to help machines understand the context of a piece of information and how it relates to other bits of content. As such, it has the potential to improve search engines and enable computer systems to more readily exchange data in ways that could be useful to agencies involved in a wide range of pursuits, including homeland security and health care.
While semantic technology has mostly been an academic exercise in recent years, it is now finding a greater role in a practical-minded government project called the National Information Exchange Model (NIEM).
NIEM pursues intergovernment information exchange standards with the goal of helping agencies more readily circulate suspicious activity reports or issue Amber Alerts, for example. The goal is to create bridges, or exchanges, between otherwise isolated applications and data stores.
The building of those exchanges calls for a common understanding of the data changing hands. The richer detail of semantic descriptions makes for more precise matches when systems seek to consume data from other systems. Agreement on semantics also promotes reuse; common definitions let agencies recycle exchanges.
Semantics in government IT
Today, NIEM offers a degree of semantic support. But some observers believe the interoperability effort will take a deeper dive into semantic technology. They view NIEM as a vehicle that could potentially make semantics a mainstream component of government IT.
“Semantically, there is a huge opportunity with NIEM,” said Peter Doolan, vice president and chief technology officer at Oracle Public Sector, which is working on tools for NIEM. “NIEM is a forcing function for the broader adoption of the deeper semantic technology that we have talked about for some time.”
As more agencies adopt NIEM, the impetus for incorporating semantics will grow. NIEM launched in 2005 with the Justice and Homeland Security departments as the principal backers. Last year, the Health and Human Services Department joined Justice and DHS as co-partners. State and local governments, particularly in law enforcement, have taken to NIEM as well. And in a move that underscores that trend, the National Association of State Chief Information Officers last month joined the NIEM executive steering committee.
“NIEM adoption is going at a furious pace,” said Richard Soley, chairman and CEO of the Object Management Group (OMG), which has been working with NIEM. “As it gets adoption, they are going to need a way to translate information that is currently in other formats. That is when you need semantic descriptions.”
NIEM’s leadership says the program is prepared for greater use of semantics. “The NIEM program stands ready to respond to the overall NIEM community regarding a broader adoption of semantic technologies,” said DHS officials who responded to questions via e-mail.
Support for semantics
NIEM is based on XML. The project grew out of the Global Justice XML Data Model (GJXDM), a guide for information exchange in the justice and public safety sectors. Although XML serves as a foundational technology for data interoperability, it is not necessarily viewed as semantic.
However, John Wandelt, principal research scientist at the Georgia Tech Research Institute (GTRI) and division chief of that organization’s Information Exchange and Architecture Division, said semantic capability has been part of NIEM since its inception. GTRI serves as the technical architect and lead developer for GJXDM and NIEM.
“From the very early days, the community has pushed for strong semantics,” he said. Wandelt pointed to XML schema, which describes the data to be shared in an exchange. “Some say schema doesn’t carry semantics,” he said. "But the way we do XML schema in NIEM, it does carry semantics.”
NIEM’s Naming and Design Rules help programmers layer an “incremental set of semantics on top of base XML,” Wandelt said. For example, a group of XML programmers tasked to build a data model of their family trees would depict relationships between parents, siblings, and grandparents. But those ties would be implied and based entirely on an individual programmer’s way of modeling.
NIEM’s design rules, on the other hand, provide a consistent set of instructions for describing connections among entities. Wandelt said those rules make relationships explicit, thereby boosting semantic understanding.
NIEM also uses the Resource Description Framework (RDF), an important underpinning of the Semantic Web, which has been slowly making its way into government IT.
RDF aims to describe data in a way that helps machines better understand relationships.
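To make "explicit relationships" concrete, here is how Wandelt's family-tree example might look as RDF triples, written with Python's open source rdflib library. The ex: vocabulary is invented purely for this illustration; NIEM's Naming and Design Rules define the real conventions.

```python
# Simplified illustration of explicit relationships as RDF triples.
# The "ex:" vocabulary is invented for the example, not taken from NIEM.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/family#")
g = Graph()
g.bind("ex", EX)

# Instead of implying ties through document structure, each relationship
# is stated as an explicit subject-predicate-object triple.
g.add((EX.Alice, EX.hasChild, EX.Bob))
g.add((EX.Bob, EX.hasChild, EX.Carol))
g.add((EX.Alice, EX.hasSibling, EX.Dan))

# Any consuming system can read these statements without guessing at the
# original modeler's intent.
print(g.serialize(format="turtle"))
```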
see the full article here: http://gcn.com/articles/2011/03/21/niem-and-semantic-web.aspx
Monday, 14 March 2011
Collective Intelligence Outsmarts Artificial Intelligence
When computers first started to infringe on everyday life, science fiction authors and society in general had high expectations for "intelligent" systems. Isaac Asimov's "I, Robot" series from the 1940s portrayed robots with completely human intelligence and personality, and, in the 1968 movie "2001: A Space Odyssey," the onboard computer HAL (Heuristically programmed ALgorithmic computer) had a sufficiently human personality to suffer a paranoid break and attempt to murder the crew!
While the computer revolution has generally outstripped almost all expectations for the role of computers in society, in the area of artificial intelligence (AI), the predictions have, in fact, outstripped our achievements. Attempts to build truly intelligent systems have been generally disappointing.
Fully replicating human intelligence would require a comprehensive theory of consciousness, which we unfortunately lack. Therefore, AI has generally focused on simulating intelligent behavior rather than intelligence itself. In the algorithmic approach, programmers labor to construct sophisticated programs that emulate a specific intelligent behavior, such as voice recognition. In the other traditional approach - expert systems - a database of facts is collected and logical routines are applied to perform analysis and deduction. Expert systems have had some success in medical and other diagnostic applications, such as systems performance management.
Each of these approaches has shown success in limited scenarios, but neither achieves the sort of broadly intelligent system promised in the early days of computing. Attempts to emulate more human-like cognitive or learning systems, using technologies such as neural nets, fuzzy logic, and genetic algorithms, have only slightly improved the intelligence of everyday software applications.
Most of us experience the limitations of artificial intelligence every day. Spell-checkers in applications such as Microsoft Word do an amazingly poor job of applying context to language correction. As a result, sentences such as, "Eye have a spelling checker, it came with my pea sea," pass through the Microsoft spelling and grammar checker without a hitch. While the Microsoft software can recognize spelling mistakes in individual words, it cannot understand the meaning of the sentence as a whole, and the result is a long way from intelligent judgment.
Collective intelligence offers a powerful alternative to traditional artificial intelligence paradigms. Collective intelligence leverages the inputs of large numbers of individuals to create solutions that traditional approaches cannot achieve. Although the term "collective intelligence" is not widely recognized, most of us experience the results of collective intelligence every day. For instance, Google uses collective intelligence when auto-correcting search inputs. Google has a large enough database of search terms to be able to automatically detect when you make an error and correct that error on-the-fly. Consequently, Google is more than able to determine that "pea sea" is almost certainly meant to be "PC."
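Google has not published its exact mechanism, but the core idea, letting the aggregate behavior of millions of users define what "correct" looks like, can be caricatured in a few lines of Python in the style of Peter Norvig's well-known toy corrector. The tiny frequency table stands in for enormous query logs, and a homophone error like "pea sea" would additionally need phonetic matching, which this sketch omits.

```python
# Toy data-driven spelling corrector: generate simple one-edit variants of
# the input and pick whichever variant users type most often. The small
# frequency table is a stand-in for massive search-query logs.
import string

WORD_FREQ = {"artificial": 120_000, "artifice": 900,
             "intelligence": 95_000, "intelligent": 60_000}

def one_edit_variants(word):
    """All strings one deletion, insertion or substitution away from word."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + c + b for a, b in splits for c in letters]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    return set(deletes + inserts + replaces)

def correct(word):
    """Return the most frequently observed candidate, or the word itself."""
    candidates = [w for w in one_edit_variants(word) if w in WORD_FREQ] or [word]
    return max(candidates, key=lambda w: WORD_FREQ.get(w, 0))

print(correct("artifical"))    # -> "artificial"
print(correct("inteligence"))  # -> "intelligence"
```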
Collective intelligence not only allows for superior spelling and grammar correction, but also is used in an increasingly wide variety of contexts, including spam detection, diagnostic systems, retail recommendations, predictive analytics, and many other fields. Increasingly, organizations find that it is more effective to apply brute force algorithms to masses of data generated by thousands of users, than to attempt to explicitly create sophisticated algorithmic models.
The ability of collective intelligence to solve otherwise intractable business and scientific problems is one of the driving forces behind the "big data" evolution. Organizations are increasingly realizing that the key to better decision making is not better programs but granular crowd-sourced data sets.
Collective intelligence is merely one of the techniques used to endow computer systems with more apparent intelligence and to better solve real world problems - it's not in any way a replacement for the human brain. However, in an increasingly wide range of applications, collective intelligence is clearly outsmarting traditional artificial intelligence approaches.
Sunday, 13 March 2011
Artificial intelligence has just got smarter
Rajeev Srinivasan
The American TV quiz show Jeopardy! has been running for over 25 years. Contestants are given clues in categories ranging from serious subjects such as World War II, to more frivolous topics like rock musicians. They then have to come up with a question in the format: “Who is…”, or “what is…” based on the clues. The clues are not straightforward and factual — a computer with a large database can crack such statements quickly — but oblique. They are full of puns, obscure relationships, jokes, allusions and so on that only a human being steeped in that culture will recognise. In that sense, the clues are not ‘context-free’ as computer languages are (or for that matter, classical Paninian Sanskrit): you must know quite a bit of cultural context to decode them.
This is infernally hard for computers, and a challenge that artificial intelligence (AI) researchers have been struggling with for decades — the holy grail of ‘natural language processing’. There have been several false starts in AI, and enthusiasm has waxed and waned, but the iconic promise of computers that can converse (such as the talking computer HAL in 2001: A Space Odyssey) has remained elusive.
This is why it is exciting news that a new IBM program (dubbed ‘Watson’ after the founder of the company), built specifically to play Jeopardy, defeated two of the world’s best human players in a special edition of the show on February 16th. There was some quiet satisfaction among the techie crowd that the day may yet arrive when intelligent robots can respond to conversational queries. Watson runs on a cluster of ninety Linux-based IBM servers, and has the horsepower to process 500 gigabytes of data (the equivalent of a million books) per second — which is necessary to arrive at an answer in no more than 3 seconds; that is the time human champions need to press the buzzer that would give them the right to answer the question.
Ray Kurzweil, an AI pioneer and futurist, suggests this level of computing power will be available in a desktop PC in about a decade. Watson’s accomplishments are qualitatively different from those of its predecessor, Deep Blue, which defeated world chess champion Garry Kasparov in 1997. In many ways, chess, with its precise rules, is much easier for computers than the loose and unstructured Jeopardy! game. Thus, Watson is much more complex than Deep Blue, which stored the standard chess openings and did a brute-force analysis of every possible outcome a few moves into the future.
The interesting question, though, is what all this means for humans. The nightmare possibility is that we have reached the tipping point where humans become redundant. That, of course, was precisely the problem with 2001: A Space Odyssey’s HAL - it felt the humans on board its spaceship were likely to cause the mission to fail, and therefore methodically set about eliminating them. Much the same dystopian vision haunts us in other science-fiction films: for instance, the omniscient Skynet in The Terminator series or the maya-sustaining machines in The Matrix.
Berkeley philosopher John Searle, writing in the Wall Street Journal, gives us some comfort. According to him, Watson is merely a symbol-manipulating engine, and it does not have superior intelligence; nor is it ‘thinking’. It merely crunches symbols, i.e. syntax, with no concept of meaning, i.e. semantics. “Symbols are not meanings,” he concludes, “Watson did not understand the questions, or its answers… nor that it won — because it doesn’t understand anything.”
Even without becoming our overlords, Watson and its descendants may cause displacement. They will make a number of jobs disappear, just as voice recognition is affecting the transcription industry. Former hedge-fund manager Andy Kessler suggests in the WSJ that while there are several types of workers, they basically boil down to ‘creators’ and ‘servers’; only the former are safe.
Technology such as Watson will, he says, disrupt not only retail workers (e.g. travel agents), bureaucrats, stockbrokers and customer support staff, but also legal and medical professionals. The latter may find applications like a doctor’s or lawyer’s assistant increasingly cutting into their job content. Thus the arrival of Watson-like artificial intelligences may cause serious disruption in the workforce, although it is not likely that they will be ordering us around any time soon. At least not yet. Humanity may be more resilient than we thought.
Tuesday, 8 March 2011
Robots Achieve Self-Awareness, May Also Develop ‘Mental Problems’
Artificial intelligence has taken a big leap forward: two roboticists (Lipson and Zagal), working at the University of Chile, Santiago, have created what they claim is the first robot to possess “metacognition” — a form of self-awareness which involves the ability to observe one’s own thought processes and thus alter one’s behavior accordingly.
The starfish-like robot (which has but four legs) accomplished this mind-like feat by first possessing two brains, similar to how humans possess two brain hemispheres (left and right). This provided the key to the automaton’s adaptability within a dynamic, and unpredictable, environment.
The double bot brain was engineered such that one ‘controller’ (i.e., one brain) was “rewarded” for pursuing blue dots of light moving in random circular patterns, and avoiding running into moving red dots. The second brain, meanwhile, modeled how well the first brain did in achieving its goal.
But then, to determine if the bot had adaptive self-awareness, the researchers reversed the rules (red dots pursued, blue dots avoided) of the first brain’s mission. The second brain was able to adapt to this change by filtering sensory data to make red dots seem blue and blue dots seem red; the robot, in effect, reflected on its own “thoughts” about the world and modified its behavior (in the second brain), fairly rapidly, to reflect the new reality.
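The published architecture is of course far more involved, but the division of labor described here, a primary controller with a fixed policy plus a second controller that watches its performance and compensates by re-labeling sensory input, can be caricatured in a short Python sketch. Everything below, from the colors to the reward threshold, is a simplified stand-in rather than the researchers' actual system.

```python
# Caricature of the "second brain" idea: the primary controller keeps its
# original policy (pursue blue, avoid red); a monitor notices that rewards
# have collapsed and compensates by swapping the color labels it passes on.
class PrimaryController:
    def act(self, color):
        return "pursue" if color == "blue" else "avoid"

class MonitoringController:
    def __init__(self, primary):
        self.primary = primary
        self.swap_labels = False
        self.rewards = []

    def observe(self, reward):
        # If performance stays poor, hypothesize that the world's rules
        # changed and start filtering the primary's sensory input.
        self.rewards.append(reward)
        if len(self.rewards) >= 5 and sum(self.rewards[-5:]) <= 0:
            self.swap_labels = True

    def act(self, color):
        if self.swap_labels:
            color = {"blue": "red", "red": "blue"}[color]
        return self.primary.act(color)

robot = MonitoringController(PrimaryController())
for _ in range(5):          # the rules have been reversed: rewards collapse
    robot.observe(-1)
print(robot.act("red"))     # -> "pursue": the monitor has adapted
```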
This achievement represents a significant advancement over earlier successes with AI machines in which a robot was able to model its own body plan and movements in its computer brain, make “guesses” as to which of its randomly selected body-plan models was responsible for the correct behavior (movement), and then eliminate all the unsuccessful models, thus exhibiting an “analogue” form of natural selection (see Bongard, Zykov, Lipson, 2006).
The team is already moving beyond this apparent meta-cognition stage and is attempting to enable a robot to develop what’s known as a ‘theory of mind’ – the ability to “know” and predict what another person (or robot) is thinking. In an early experiment, the team had one robot observe another robot moving in a semi-erratic manner (in a spiral pattern) in the direction of a light source. After a short while, the observer bot was able to predict the other’s movement so well that it was able to “lay a trap” for it.
Lipson believes this to be a form of “mind reading”. However, a critic might argue that this is more movement-reading than mind-reading, and that it remains to be proven that the observer bot has any understanding of the other’s “mind”. A behavior (such as the second bot trapping the first) might simulate some form of awareness of another’s thought process, but can we say for sure that this is what is really happening?
One idea that might lend credence to this claim is if the observer bot had a language capacity that allowed it to express its awareness, or ‘theory of mind’. Nearly two decades ago, pioneering cognitive biologists Maturana and Varela posited that “Language is the sine qua non of that experience called mind.”
And achieving such a “languaging” capacity is not out of the question; a few years ago, a team of European roboticists created a community of robots that not only learned language, but soon learned to invent new words and to share these new words with the other robots in the community (see: Luc Steels, of the University of Brussels/SONY Computer Science Laboratory in Paris).
It is conceivable that a similarly equipped robot — also possessing the two-brain structure of Lipson’s robots — could observe itself thinking about thinking, and express this awareness through its own (meta) language. Hopefully, we will be able to understand what it is trying to express when and if it does.
In a recent SciAm article on this topic, Lipson stated:
“Our holy grail is to give machines the same kind of self-awareness capabilities that humans have”
One other question that remains, then: Will the robot develop a more complex simulation/awareness of itself, and the world, as it learns and interacts with the world, as we do?
The four-legged robot also exhibited another curious behavior: when one of its legs was removed (so that it had to relearn to walk), it seemed to show signs of what is known as phantom limb syndrome, the sensation that one still has a limb even though it is in fact missing (this is common in people who have lost limbs in war or accidents). In humans, this syndrome represents a form of mental aberration or neurosis (perhaps even a hallucination). A robot acting in this way — holding a false notion of itself — may give scientists and AI engineers a glimpse into robot mental illness.
A robot with a mental illness or neurosis? Yes, this seems entirely likely given the following three theorems:
1] Neurosis is accompanied by (and is perhaps a function of) acute self-awareness; the more self-aware, the more potentially neurotic one becomes.
2] Robots with advanced heuristics (enabled by multiple brains, self-simulators and sensor inputs) will inevitably develop advanced self-awareness, and thus the greater potential for 1] above.
3] There is an ancient, magickal maxim: Like begets like. The creator is in the created (in Biblical terms: “God made man in his own image”).
Mayhaps the ‘Age of Spiritual Machines‘ could become an ‘Age of Neurotic Machines‘ (or Psychotic Machines, depending on your view of humans), too. So then, if this is to be the fate of I, Robot, let’s do our droid drudges a favor and engineer a robo-shrink, or, at least, a good self-help program…and a love for Beethoven.
Monday, 7 March 2011
Managing Free Text Archives with Linguistic Semantics
Semantic natural language processing interprets the meaning of free text and enables users to find, mine and organize very large archives quickly and effectively. Linguistic semantic processing finds all and only the desired information because it determines meaning in context and maps synonym and hyponym relationships. It avoids assigning incorrect relationships because meaning is precisely determined, and at the same time it makes all the desired connections exhaustively because it is backed by a massive lexicon and semantic map. The key to the scalability of Cognition's linguistic semantic processing is bottom-up interpretation of the text, finding the meaning of words and phrases in the local context one at a time. The technology has a semantic map and algorithms that interpret language linguistically rather than statistically, so that the meaning of a given document is independently determined. As a result the methods scale to a theoretically unlimited number of documents. Linguistic semantic NLP is being deployed in many applications that facilitate rapid and accurate management of very large archives:
1. Free auto-categorization - Texts are categorized into an existing ontology or a special client-defined ontology according to the salient concepts in them.
2. Segregation by genre - The software determines which of a predetermined set of genres a document falls into. In the legal domain, the genre set might be "contracts" and, within "contracts", "employment contract", "services contract", etc., PPM, "pricing proposal", "mortgage agreement", and so on.
3. Conceptual foldering (or tagging) - Documents are placed in conceptual folders using conceptual Boolean expressions that cover all of the topics desired for the folders. This is especially useful in e-Discovery, where documents can be culled leaving only the relevant portion to be reviewed. (A toy sketch of this idea appears after the list.)
4. Intelligent search - The semantic search function retrieves almost all and only the desired documents. Very high precision is achieved by disambiguating words in context, and by phrasal reasoning. Very high recall is achieved by paraphrase and ontological reasoning.
5. Text Analytics - Calculating the frequency and salience of words, word senses, concepts and phrases in a document or document base lays bare its significant semantic content.
6. Sentiment analysis - With semantic processing, sentiments can be determined. Existing lexical resources identify the "pejorative" and "negative" words.
7. Language monitoring - In some situations such as child chat or email, certain types of language may need to be blocked. Linguistic semantic processing detects undesirable (or desirable) language as defined by administrators.
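Cognition's implementation is proprietary, so the following is only a toy sketch of the conceptual foldering idea from item 3: folders are Boolean expressions over concepts, and documents land in whichever folders their concepts satisfy. The concept lexicon, folders and documents are invented, and a real system would resolve concepts through a semantic map rather than the literal word matching used here.

```python
# Toy conceptual-foldering sketch (see item 3 in the list above).
# Folders are Boolean expressions over abstract concepts; the lexicon and
# documents are invented for illustration only.
FOLDERS = {
    "employment contracts": lambda c: "contract" in c and "employment" in c,
    "pricing":              lambda c: "price" in c or "proposal" in c,
}

CONCEPT_LEXICON = {
    "contract":   {"contract", "agreement"},
    "employment": {"employee", "employment", "hire"},
    "price":      {"price", "pricing", "cost"},
    "proposal":   {"proposal", "quote"},
}

def concepts_in(text):
    """Map surface words to the abstract concepts they signal."""
    words = set(text.lower().split())
    return {concept for concept, surface in CONCEPT_LEXICON.items() if words & surface}

def folders_for(text):
    found = concepts_in(text)
    return [name for name, expression in FOLDERS.items() if expression(found)]

print(folders_for("Employment agreement for new hire"))    # ['employment contracts']
print(folders_for("Pricing proposal for cloud services"))  # ['pricing']
```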
Kathleen Dahlgren has a Ph.D. in Linguistics and a Post-Doc in Computer Science from UCLA. She has worked in computational linguistics, and contributed publications to the field, for over 20 years. Her publications cover topics in sense disambiguation, question-answering, relevance, coherence and anaphora resolution. Her book, Naive Semantics for Natural Language Understanding, primarily treats a method for representing commonsense knowledge and lexical knowledge, and how this can be used in sense disambiguation and discourse reasoning. The software offered at Cognition Technologies is patented by Kathleen Dahlgren and Edward P. Stabler, Jr., and has been under development for a number of years, so that it now has a wide-coverage semantic map of English.
Sunday, 6 March 2011
Reflections on Watson the Computer
By Sally Blount / Kellogg School of Management
The gap between human and artificial intelligence seems to be getting smaller: on Feb. 16, IBM's “Watson” computer outsmarted two Jeopardy! champions.
A recent edition of TIME magazine explored our quest for human perfection and the rapidly emerging human-technology interface. And the current issue of Atlantic magazine reports the ever-closer results of the Turing Test—which determines whether a human or computer program can hold the most human-like conversation for five minutes.
As I read about these technological advancements, I can't help thinking that, if given a chance, I would love to have a chip planted in my brain that would help me remember names. I meet so many people every day from across our 60,000-person community of students, administrators, faculty, alumni and corporate partners. I would feel so much better and be more effective if, with a little help from technology, I could remember everybody's names every time I saw them.
But then I begin to wonder: With that chip implanted, would I become progressively worse at naturally remembering names? I'm not sure I like that idea ... and then I can't help but think, what is being human about, anyway? Is it really about each of us trying to become more perfect, each in our own way, or is there some broader, less individually-focused aim?
Once we create computers and performance-enhanced humans that can outperform real humans (by 2045, as TIME predicts), will we have found jobs and eradicated poverty for the billion-plus among us who live on less than $2 a day? Will we have the infrastructure in place to provide every human on the planet with access to clean water and a warm bed? Will we have found deterrents to dramatically reduce, if not halt, the black market for sex trafficking? If the answer to these questions is “yes,” then these technological advancements will be of true value to humanity. But I have a terrible feeling that in 2045 the answers will still be a resounding “no.”
That's because there are some human limitations that technology is far from being equipped to fix. It can't overcome limitations that we ourselves don't know how to solve. One of our most glaring challenges is our collective inability to build effective organizations—organizations that consistently and reliably perform in a way that exemplifies the best of human performance and values. Each day's news reinforces this truth—in the Middle East, Washington, Mexico, and in corporate, government and religious headquarters around the world—as startling and saddening revelations emerge about flawed and corrupt organizations.
If we really want to change the world, we need to put more resources into studying and enhancing our shared human capabilities at building organizations—be they firms, government agencies or NGOs. There are many pressing questions: What are the barriers that deter us? Can we develop and use technology in ways that can counter these barriers? What political and social infrastructure do we need to support organization building? What individual-level skills are needed to equip organization-builders and change agents in established bureaucracies? How does leadership rhetoric help us on this road?
Until we become as good at building — and sustaining — effective organizations as we are at computer programming, we will never realize our full human potential.
Semantic Technologies Bear Fruit In Spite of Development Challenges
Despite the complexities associated with semantic technologies, efforts to adopt the approach for drug development are bearing fruit, according to several presentations at last week's Conference on Semantics in Healthcare and Life Sciences in Cambridge, Mass.
In a conversation with BioInform, Ted Slater, head of knowledge management services at Merck and the CSHALS conference chair, described this year's meeting as "the strongest program" in the four years of its existence.
"Four years ago ... nobody [really] knew about [semantics]," Slater said. "Now we are at the point where we're talking about ... expanding the scope a little bit [and asking,] 'What else can we add into the mix to make it a more complete picture?'"
This year's conference began with a series of hands-on tutorials coordinated by Joanne Luciano, a research associate professor at Rensselaer Polytechnic Institute, that were intended to show how the technology can be used to address drug development needs.
During the tutorials, participants used semantic web tools to create mashups using data from the Linked Open Data cloud and semantic data that they created from raw datasets. Participants were shown how to load data into the subject-predicate-object data structure dubbed the "triple store;" query it using the semantic query language SPARQL; use inference to expand experimental knowledge; and build dynamic visualizations from their results.
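For readers who did not attend, the core of that exercise (minus the Linked Open Data mashups and inference steps) can be reproduced with the open source rdflib package: load triples into an in-memory store, then query them with SPARQL. The tiny dataset below is an invented placeholder, not the tutorial's material.

```python
# Minimal triple-store exercise: load RDF data, then query it with SPARQL.
# The compounds, targets and disease are invented placeholders.
from rdflib import Graph

TURTLE_DATA = """
@prefix ex: <http://example.org/pharma#> .
ex:compoundA ex:inhibits ex:kinase1 .
ex:compoundB ex:inhibits ex:kinase2 .
ex:kinase1   ex:implicatedIn ex:diseaseX .
"""

g = Graph()
g.parse(data=TURTLE_DATA, format="turtle")

# Which compounds act on a target implicated in diseaseX?
query = """
PREFIX ex: <http://example.org/pharma#>
SELECT ?compound WHERE {
    ?compound ex:inhibits ?target .
    ?target ex:implicatedIn ex:diseaseX .
}
"""
for row in g.query(query):
    print(row.compound)  # -> http://example.org/pharma#compoundA
```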
Luciano told BioInform that this was the first year that CSHALS offered practical tutorials and the response from participants was mostly positive. Furthermore, the tutorials were made available for users in the RDF format so that “we were in real time, during the tutorial, able to run parallel tracks to meet all the needs of the tutorial participants,” she said.
While it's clear to proponents that semantic technology adds value to data, several speakers at the conference indicated that there is room for improvement and that much of the community remains unaware of the advantages that the semantic web offers.
For example, Lawrence Hunter, director of the computational bioscience program and the Center for Computational Pharmacology at the University of Colorado, pointed out that the field is still lacking good approaches to enable "reasoning" or, in other words, to figure out how "formal representations of data can get us places that simple search and retrieval wouldn’t have gotten us."
During his presentation, John Madden, an associate professor of Pathology at Duke University, highlighted several factors that need to be considered in efforts to "render" information contained in medical documents, such as laboratory reports, physician's progress notes, admission summaries, in the RDF format.
A major challenge for these efforts, he said, is that these documents contain a lot of "non-explicit information" that’s difficult to capture in RDF such as background medical domain knowledge; the purpose of the medical document and the intent of the author; "hedges and uncertainty"; and anaphoric references, which he defined as "candidate triples where it's unclear what the subject is."
Yet despite its complexities, many researchers are finding useful applications for the technology. For example, Christopher Baker of the University of New Brunswick described a prototype of a semantic framework for automated classification and annotation of lipids.
The framework is comprised of an ontology developed in OWL-DL that uses structural features of small molecules to describe lipid classes; and two federated semantic web services deployed within the SADI framework, one of which identifies relevant chemical "subgraphs" and a second that “assigns chemical entities to appropriate ontology classes.”
Other talks from academic research groups described an open source software package based on Drupal that can be used to build semantic repositories of genomics experiments and a semantics-enabled framework that would keep doctors abreast of new research developments.
Creating Uniformity
Semantic technologies are also finding their way into industry. Sherri Matis-Mitchell, principal informatics scientist at AstraZeneca, described the first version of the firm’s knowledgebase, called PharmaConnect, which was released last October and integrates internal and external data to provide connections between targets, pathways, compounds, and diseases.
Matis-Mitchell explained that the tool allows users to conduct queries across multiple information sources "using unified concepts and vocabularies." She said that the idea behind adopting semantic technologies at AstraZeneca was to shorten the drug discovery timeframe by bringing in "knowledge to support decision-making" earlier on in the development process.
The knowledgebase is built on a system called Cortex and receives data from four workstreams. The first is chemistry intelligence, which supports specific business questions and can be used to create queries for compound names and structures. The second is competitive intelligence, which provides information about competing firms' drug-development efforts, while the final two streams are disease intelligence, used to assess drug targets; and drug safety intelligence.
In a separate presentation, Therese Vachon, head of the text mining services group at the Novartis Institutes for Biomedical Research, described the process of developing a federated layer to connect information stored in multiple data silos based on "controlled terminologies" that provide "uniform wording within and across data repositories."
Is the Tide Turning?
At last year's CSHALS, there was some suggestion that pharma's adoption of semantic methods was facing the roadblocks of tightening budgets, workforce cuts, and skepticism about the return on investment for these technologies (BI 03/05/2010)
Matis-Mitchell noted in an email to BioInform that generally new technologies take time to become widely accepted and that knowledge engineering and semantic technologies are no different.
She said her team overcomes this reluctance by regularly publishing its "successes to engender greater adoption of the tools and methods." While she could not provide additional details about these successes in the case of PharmaConnect for proprietaty reasons, she noted that the "main theme" is that it "helped to save time and resources and supported more efficient decision making."
However some vendors now feel that drug developers may be willing to give semantic tools a shot and are gearing up to provide products that support the technology.
In one presentation, Dexter Pratt, vice president of innovation and knowledge at Selventa, presented the company's Biological Expression Language, or BEL, a knowledge representation language that represents scientific findings as causal relationships that can be annotated with information about biological context, experimental methods, literature sources, and the curation process.
Pratt said that Selventa plans to release BEL as an open source language in the third quarter of this year and that it will be firm's first offering for the community.
Following his presentation, Pratt told BioInform that offering the tool under an open source license is "consistent" with Selventa's revised strategy, announced last December, when it changed its name from Genstruct and decided to emphasize its role as a data analysis partner for drug developers (BI 12/03/2010).
To help achieve this vision Selventa "will make the BEL Framework available to the community to promote the publishing of biological knowledge in a form that is use-neutral, open, and computable" Pratt said .adding that the company's pharma partners have been "extremely supportive" of the move.
Although the language has already been implemented in the Genstruct Technology Platform for eight years, In preparation for it's official release in the open source space, Selventa's developers are working to develop a "new build" of the legacy infrastructure that's " formalized, revised, and streamlined."
"Four years ago ... nobody [really] knew about [semantics]," Slater said. "Now we are at the point where we're talking about ... expanding the scope a little bit [and asking,] 'What else can we add into the mix to make it a more complete picture?'"
This year's conference began with a series of hands-on tutorials coordinated by Joanne Luciano, a research associate professor at Rensselaer Polytechnic Institute, that were intended to show how the technology can be used to address drug development needs.
During the tutorials, participants used semantic web tools to create mashups from data in the Linked Open Data cloud and from semantic data they generated themselves from raw datasets. Participants were shown how to load data into a "triple store," a database built around subject-predicate-object statements; query it with the SPARQL query language; use inference to expand experimental knowledge; and build dynamic visualizations from their results.
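As a rough, purely illustrative sketch of that workflow (not taken from the tutorial materials), the following Python snippet uses the rdflib library to assert a handful of hypothetical subject-predicate-object triples in an in-memory store and then query them with SPARQL; the example.org namespace and all entity names are invented.

# Minimal sketch of the triple-store/SPARQL workflow described above,
# using the rdflib library; all URIs and names are hypothetical.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")  # illustrative namespace

g = Graph()
# Assert a few example triples linking a compound, a target, and a disease.
g.add((EX.compound1, RDF.type, EX.Compound))
g.add((EX.compound1, EX.inhibits, EX.targetA))
g.add((EX.targetA, EX.associatedWith, EX.diseaseX))

# SPARQL query: which compounds act on targets associated with a disease?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?compound ?disease
    WHERE {
        ?compound ex:inhibits ?target .
        ?target ex:associatedWith ?disease .
    }
""")

for compound, disease in results:
    print(compound, disease)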
Luciano told BioInform that this was the first year CSHALS offered practical tutorials and that the response from participants was mostly positive. The tutorial materials were also made available to users in RDF format, so that “we were in real time, during the tutorial, able to run parallel tracks to meet all the needs of the tutorial participants,” she said.
While it's clear to proponents that semantic technology adds value to data, several speakers at the conference indicated that there is room for improvement and that much of the community remains unaware of the advantages that the semantic web offers.
For example, Lawrence Hunter, director of the computational bioscience program and the Center for Computational Pharmacology at the University of Colorado, pointed out that the field is still lacking good approaches to enable "reasoning" or, in other words, to figure out how "formal representations of data can get us places that simple search and retrieval wouldn’t have gotten us."
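Hunter's distinction can be illustrated with a small, hypothetical example: once a class hierarchy is stated formally, a SPARQL 1.1 property path can traverse rdfs:subClassOf links and return answers that a flat label lookup would miss. The sketch below again uses rdflib, with invented terms.

# Hypothetical illustration: formal structure lets a query follow a class
# hierarchy instead of matching literal labels only.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Statin, RDFS.subClassOf, EX.LipidLoweringAgent))
g.add((EX.atorvastatin, RDF.type, EX.Statin))

# The property path a/rdfs:subClassOf* follows subclass chains of any
# length, so atorvastatin is returned as a lipid-lowering agent even
# though that fact was never asserted directly.
results = g.query("""
    PREFIX ex: <http://example.org/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?drug
    WHERE {
        ?drug a/rdfs:subClassOf* ex:LipidLoweringAgent .
    }
""")

for (drug,) in results:
    print(drug)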
During his presentation, John Madden, an associate professor of pathology at Duke University, highlighted several factors that need to be considered in efforts to "render" in RDF the information contained in medical documents such as laboratory reports, physicians' progress notes, and admission summaries.
A major challenge for these efforts, he said, is that such documents contain a lot of "non-explicit information" that is difficult to capture in RDF, such as background medical domain knowledge; the purpose of the document and the intent of its author; "hedges and uncertainty"; and anaphoric references, which he defined as "candidate triples where it's unclear what the subject is."
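One way to make such a hedge explicit, shown here purely as an illustration rather than as Madden's approach, is to reify the candidate triple and attach the source and the level of certainty as additional statements; the clinical vocabulary, patient, and report below are invented.

# Illustrative only: attaching uncertainty to a clinical assertion by
# reifying the triple; the vocabulary, patient, and report are invented.
from rdflib import Graph, Namespace, Literal, RDF, BNode

EX = Namespace("http://example.org/clinical/")

g = Graph()

# The candidate triple: the report suggests, but does not flatly assert,
# that the patient has pneumonia.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.patient42))
g.add((stmt, RDF.predicate, EX.hasDiagnosis))
g.add((stmt, RDF.object, EX.Pneumonia))

# Hedge metadata: which document made the claim, and how it was worded.
g.add((stmt, EX.assertedBy, EX.radiologyReport1))
g.add((stmt, EX.certainty, Literal("possible")))

print(g.serialize(format="turtle"))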
Yet despite its complexities, many researchers are finding useful applications for the technology. For example, Christopher Baker of the University of New Brunswick described a prototype of a semantic framework for automated classification and annotation of lipids.
The framework comprises an ontology, developed in OWL-DL, that uses structural features of small molecules to describe lipid classes, and two federated semantic web services deployed within the SADI framework: one identifies relevant chemical "subgraphs," while the other “assigns chemical entities to appropriate ontology classes.”
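The talk summary does not spell out how the ontology's structural rules are applied, but the general pattern of classifying a molecule from its structural features can be sketched roughly as follows; the feature names and class IRIs are invented and do not reflect Baker's actual ontology or SADI services.

# Rough, invented sketch of rule-based classification of a molecule into
# ontology classes from structural features.
LIPID_CLASS_RULES = {
    # required structural features -> hypothetical ontology class IRI
    frozenset({"glycerol_backbone", "two_fatty_acyl_chains", "phosphate_head"}):
        "http://example.org/lipid#Glycerophospholipid",
    frozenset({"sphingoid_base", "fatty_acyl_chain"}):
        "http://example.org/lipid#Sphingolipid",
}

def classify(features):
    """Return every ontology class whose required features are all present."""
    return [iri for required, iri in LIPID_CLASS_RULES.items()
            if required <= features]

print(classify({"glycerol_backbone", "two_fatty_acyl_chains",
                "phosphate_head", "choline"}))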
Other talks from academic research groups described an open source software package based on Drupal that can be used to build semantic repositories of genomics experiments and a semantics-enabled framework that would keep doctors abreast of new research developments.
Creating Uniformity
Semantic technologies are also finding their way into industry. Sherri Matis-Mitchell, principal informatics scientist at AstraZeneca, described the first version of the firm’s knowledgebase, called PharmaConnect, which was released last October and integrates internal and external data to provide connections between targets, pathways, compounds, and diseases.
Matis-Mitchell explained that the tool allows users to conduct queries across multiple information sources "using unified concepts and vocabularies." She said the idea behind adopting semantic technologies at AstraZeneca was to shorten the drug discovery timeframe by bringing in "knowledge to support decision-making" earlier in the development process.
The knowledgebase is built on a system called Cortex and receives data from four workstreams. The first is chemistry intelligence, which supports specific business questions and can be used to create queries for compound names and structures. The second is competitive intelligence, which provides information about competing firms' drug-development efforts. The final two streams are disease intelligence, used to assess drug targets, and drug safety intelligence.
In a separate presentation, Therese Vachon, head of the text mining services group at the Novartis Institutes for Biomedical Research, described the process of developing a federated layer to connect information stored in multiple data silos based on "controlled terminologies" that provide "uniform wording within and across data repositories."
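A very reduced sketch of what "uniform wording" buys in practice, with invented terms and records rather than Novartis's actual terminologies: synonyms from different silos are mapped to one preferred term before a cross-repository search is run.

# Invented sketch: normalize free-text terms from separate data silos to a
# controlled vocabulary entry, then search both silos with one query term.
CONTROLLED_VOCAB = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "myocardial infarction": "myocardial infarction",
}

silo_a = [{"study": "A-101", "indication": "Heart Attack"}]
silo_b = [{"trial": "B-7", "condition": "MI"}]

def normalize(term):
    return CONTROLLED_VOCAB.get(term.lower(), term.lower())

query = normalize("heart attack")
hits = [r for r in silo_a if normalize(r["indication"]) == query]
hits += [r for r in silo_b if normalize(r["condition"]) == query]
print(hits)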
Is the Tide Turning?
At last year's CSHALS, there was some suggestion that pharma's adoption of semantic methods was facing the roadblocks of tightening budgets, workforce cuts, and skepticism about the return on investment for these technologies (BI 03/05/2010).
Matis-Mitchell noted in an email to BioInform that generally new technologies take time to become widely accepted and that knowledge engineering and semantic technologies are no different.
She said her team overcomes this reluctance by regularly publishing its "successes to engender greater adoption of the tools and methods." While she could not provide additional details about these successes in the case of PharmaConnect for proprietary reasons, she noted that the "main theme" is that it "helped to save time and resources and supported more efficient decision making."
However, some vendors now feel that drug developers may be willing to give semantic tools a shot and are gearing up to provide products that support the technology.
In one presentation, Dexter Pratt, vice president of innovation and knowledge at Selventa, described the company's Biological Expression Language, or BEL, a knowledge representation language that captures scientific findings as causal relationships, which can be annotated with information about biological context, experimental methods, literature sources, and the curation process.
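For a flavor of what such a statement might look like, the sketch below models a causal finding with context and provenance annotations; the rendered syntax is loosely patterned on published BEL examples and may not match the form Selventa ultimately releases, and the annotation values are placeholders.

# Illustrative only: a causal scientific finding with annotations, loosely
# in the spirit of BEL; field names and values are placeholders.
from dataclasses import dataclass, field

@dataclass
class CausalStatement:
    subject: str              # e.g. a protein abundance
    relation: str             # e.g. "increases" or "decreases"
    obj: str                  # e.g. a biological process
    annotations: dict = field(default_factory=dict)

stmt = CausalStatement(
    subject='p(HGNC:TNF)',
    relation='increases',
    obj='bp(GO:"inflammatory response")',
    annotations={
        "species": "human",
        "tissue": "liver",                  # biological context (invented)
        "citation": "PubMed ID goes here",  # literature source placeholder
    },
)

print(stmt.subject, stmt.relation, stmt.obj)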
Pratt said that Selventa plans to release BEL as an open source language in the third quarter of this year and that it will be the firm's first offering for the community.
Following his presentation, Pratt told BioInform that offering the tool under an open source license is "consistent" with Selventa's revised strategy, announced last December, when it changed its name from Genstruct and decided to emphasize its role as a data analysis partner for drug developers (BI 12/03/2010).
To help achieve this vision, Selventa "will make the BEL Framework available to the community to promote the publishing of biological knowledge in a form that is use-neutral, open, and computable," Pratt said, adding that the company's pharma partners have been "extremely supportive" of the move.
Although the language has been implemented in the Genstruct Technology Platform for eight years, in preparation for its official open source release Selventa's developers are working on a "new build" of the legacy infrastructure that is "formalized, revised, and streamlined."