
Jason Hoyt › Comments

Roderic Page
Mendeley API: we'll bring the awesome if you bring the documentation - http://iphylo.blogspot.com/2010...
Hi Roderic, we hear you loud and clear. I think our developers have responded to your questions on Twitter, the Google Group, and private email; if not, please let me know. We would have loved it if you had participated in our beta API program to help spot some of these bugs. The duplicates you are seeing are actually not a bug in the API, but rather part of the process in how we clean... more... - Jason Hoyt
@Jason: can the API also be used without registering an app? - Egon Willighagen
AJCann
Mendeley’s one-click web importer now saves webpage snapshots - http://www.mendeley.com/blog...
I'm far from convinced this is a good thing. Importing non-peer-reviewed web pages into Mendeley will reduce the utility of the database by polluting tags with less valuable links. This is NOT what I want to use Mendeley for. If anyone else wants to use Mendeley in this way, that's fine, but it reduces the utility of the service for me by introducing extraneous noise into formerly valuable tags and searches. - AJCann
There are some journals... like old articles in BMJ that are only available as HTML and not as PDFs. I can see that this might be a solution... possibly. But I have generally not found the Mendeley importer to be that useful..... actually I am probably going to buy EndNote for my home computer for my doctorate, as Mendeley still has too many weaknesses at present. - Anne Marie Cunningham
I signed up for a mendeley account but while I wait for the email (which has yet to arrive) what are the pros and cons versus CiteULike and Zotero? Which (if any) would you recommend for (a) students (b) researchers? - Chris Jobling
Mendeley has two components, a website plus desktop software which you need to install. CiteULike is entirely web based. Both are cross platform and cross browser and not linked to Firefox like Zotero. Beyond that, it's a matter of personal preference plus finding an active existing community which is of value to you. - AJCann
AJ: Interesting point. Perhaps we should look into excluding saved snapshots from tag link results; and/or reducing their prominence in search results. Anne Marie: Which are the main weaknesses you're finding at the moment? Maybe I can tell you when they'll be fixed or when they're addressed on our roadmap. Chris: The e-mail should pretty much arrive instantly - have you checked your spam folder? If it's not there, please let me know - I can generate an account verification link. - Victor / Mendeley Team
I can understand why people may want to use one bookmarking service for everything, but my experience is that the value of services is improved if they are somewhat more specialised. - AJCann
AJ: I agree, and we see our specialisation as providing research management tools for desktop and web. More and more documents that are relevant for academics are websites - research papers in HTML format, newspaper/magazine articles, reports, blog posts etc, and these can now be saved (and full-text searched) in Mendeley alongside other research material. Our goal is definitely not to become a general-purpose bookmarking tool like del.icio.us. - Victor / Mendeley Team
Victor ... thanks for the response. No, still no email, and nothing in my spam folder either. C.P.Jobling@Swansea.ac.uk is my address. - Chris Jobling
AJ: I can see the value of the community features of a bibliography tool for researchers, but what would you recommend to undergraduate students? A personal database like EndNote (or EndNote Web), or something else? In an introduction to web 2.0 tools for research, I tell my engineering research dissertation students about delicious and Zotero. I haven't used CiteULike or Mendeley yet. What do you think of BB Scholar? - Chris Jobling
We get our first year students to use delicious since they rarely access primary academic literature, and second year students to use CiteULike (because we have local problems with the Mendeley software install) in preparation for final year independent research. - AJCann
Zotero already does this well. It's a useful feature of some bookmarking sites, but I agree that it will clutter up Mendeley. (I use CiteULike and cross-post with Mendeley, to make sure I never lose any links.) - Naomi
One person's garbage is another person's data :) @AJ - Totally get your point about the Web page snapshots. As Victor stated, we'll be introducing filters so that if Web snapshots are not in your research interest, then you do not have to be bothered with the added noise. For others, it is an important research tool and even data for research, so it will still be available. - Jason Hoyt
Sounds like Christmas has arrived early for spammers :-) Of course, we (CiteULike) have exactly the opposite problem - we don't allow web pages into our public views at all, even bona fide ones. - Fergus Gallagher
And I have to say that is one of the features I really value about CiteULike (after early unfavourable experiences with Connotea). - AJCann
I had a similar unpleasant experience with Connotea. Mendeley doesn't have an open web form that bots can submit, so it may be a little protected. It's definitely something to keep a close eye on. Anne Marie - the developers have been forwarded your comments from earlier, so you should hear back soon. - Mr. Gunn
for folks farther up the thread - if you're trying to decide among the various options, Martin Fenner compares and contrasts them so you can make a better decision. You could also talk to your librarian. - Christina Pikas
@MrGunn I think it would be a mistake to assume that you're protected. If you're targeted, your bookmarklet is easily-reverse-engineered spammer heaven (it's just a form-post after all). We get a surprising number of quite sophisticated targeted spamming attempts. Actually, I think the vast majority is hand-posted. - Fergus Gallagher
An aside first... does anyone else find it annoying that if you click on a comment on FF before logging in, you are taken to a screen where the only option is to set up an account, even if you already have one? In January, on starting my doctorate, I decided to try to work paperlessly, using Mendeley to annotate PDFs rather than print off hard copies. This works to a... more... - Anne Marie Cunningham
I'll just add my voice to the spam/noise comment. The specialization into research papers was the reason I have been moving from connotea to citeulike. You should at least define the entries in a way that is easy for users to just ignore entries that are not research papers. - Pedro Beltrao
I think indicating the content type makes sense as well. Obviously, in any system that displays search results, there are going to be those who attempt to game the results. Measures are being taken to address not only outright spam, but legitimate articles which have had some SEO-type tweaking of the titles, abstract, and body text. The comparison chart Christina mentions is here:... more... - Mr. Gunn
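A rough sketch of the content-type filtering idea raised above: entries carry a type field so that web snapshots can simply be excluded from tag and search results. The field names and data are hypothetical, not Mendeley's actual schema.

```python
# Hypothetical library entries; "type" is an assumed field, not Mendeley's schema.
entries = [
    {"title": "A peer-reviewed article", "type": "journal_article", "tags": ["malaria"]},
    {"title": "A saved web page snapshot", "type": "web_snapshot", "tags": ["malaria"]},
]

def filter_entries(entries, exclude_types=("web_snapshot",)):
    """Keep only entries whose content type is not in exclude_types."""
    return [e for e in entries if e["type"] not in exclude_types]

for e in filter_entries(entries):
    print(e["title"])  # prints only the journal article
```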
Jean-Claude Bradley
Binfield: article level metrics should be reduced to a single number for tenure committees - I really don't agree with this #scspn
Nor do I, but what he probably meant is to recognize that tenure committees are not willing to deal with much more than one number (let alone read articles or even check out wiki edits). - Daniel Mietchen
If you focus on that you will discourage people from repurposing their OA articles. - Jean-Claude Bradley
That's the core issue though ... measurement of scientific impact is more than just a single number. - Walter Jessen
Agree again, and think that education about metrics, as Heather pointed out, will be needed. - Daniel Mietchen
Researchers need to educate their committees in their applications - and include support from their colleagues - Jean-Claude Bradley
If you try to reduce it to a single number, how long before people game the system with bots? - Jean-Claude Bradley
not if it gets converted to a single number without people actually looking at the multiple values :) - Jean-Claude Bradley
Single numbers = one-dimensional metrics. We could do this and call it the "number for the intellectually challenged": NIC? :-) - Björn Brembs
In a scientific world where PCA is about the top of multivariate analysis... yes, NIC would do. - Egon Willighagen
Indeed. My point is that although a large basket of metrics is preferable, we live in the real world, in which people want a single number, and without one they may never move away from the Impact Factor. Therefore, **perhaps** a single number (at the article level) would be a starting point in trying to change hearts and minds. I am certainly not saying that I think a single number is a good thing, though, and rest assured we won't be creating one. It's up to the Academy what they do with the data... - Peter Binfield
Also note that in the real world tenure applicants need to digest and explain their accomplishments for the committee, which would certainly include any metrics, including IF. - Jean-Claude Bradley
@Peter... are there any plans to include citation numbers for the papers that cite the paper for which the numbers are calculated? More valuable than blog counts, no? Details and context in my blog: http://chem-bla-ics.blogspot.com/2010... - Egon Willighagen
Not sure I understand the question - Peter Binfield
Say the citation pattern: C cites B, B cites A. Given a metric for A... now #B papers are counted, but not #C papers.... if B uses functionality in A, and C cites B, then this is a partial cite of A... CDK use case in my blog: the BRENDA enzyme database uses CDK functionality (fingerprint); BRENDA is cited 241 times, CDK itself a mere 115 times... Should the CDK papers not have inherited a... more... - Egon Willighagen
Ah, I see. We take all our citation data directly from third parties (Scopus, CrossRef, PMC), so unless they have that data, and can provide it easily, we couldn't have it either. Also, have you ever looked at the methodology behind Eigenfactor? - Peter Binfield
When they first came into existence I did... but from my memory, it was mostly oriented at journal impact, not article impact... Has that changed? Need to check it out again then... - Egon Willighagen
Nope, I think it is still journal level, but they incorporate the similar concept of weighting based on where the citations are coming from. At the end of the day it is a massive network analysis problem which I assume would get exponentially harder when you move from the journal to the article level. - Peter Binfield
Not sure... it's just a set of linear equations... finding the solution is iterative, and they can stop at any convergence criterion they like... probably, very much like PageRank... - Egon Willighagen
We covered the Eigenfactor in my cheminfo class - see FAQ16 http://getcheminfo.wikispaces.com/FAQ - Jean-Claude Bradley
Egon, isn't what you are talking about here effectively PageRank for papers? You estimate influence by traversing the network. Being cited in e.g. one BRENDA paper makes a small difference to overall rank, but being cited by even a relatively small number (~10) of highly cited papers has a much bigger effect than simply having lots of citations from uncited papers. - Cameron Neylon
Yes, quite so. I guess anyone can do this, but it depends on the availability of public citation networks... - Egon Willighagen
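A minimal sketch of the PageRank-style weighting Egon and Cameron describe, run over a toy citation graph (C cites B, B cites A); the paper IDs, damping factor, and iteration count are illustrative assumptions, not a published metric.

```python
# Toy citation graph: an edge X -> Y means "paper X cites paper Y".
cites = {
    "C": ["B"],   # C cites B
    "B": ["A"],   # B cites A, so some of C's weight reaches A via B
    "D": ["A"],
    "A": [],
}

def influence(cites, damping=0.85, iterations=50):
    """Iteratively propagate weight along citation edges, PageRank-style."""
    papers = list(cites)
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(papers) for p in papers}
        for citing, cited in cites.items():
            if cited:
                share = damping * rank[citing] / len(cited)
                for target in cited:
                    new[target] += share
            else:
                # Papers that cite nothing spread their weight evenly.
                for p in papers:
                    new[p] += damping * rank[citing] / len(papers)
        rank = new
    return rank

print(influence(cites))  # A gains weight both from direct citations and, via B, from C
```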
Yes, public data is a big problem. Which is lucky since I just happen to have a proposal to build federated repositories of citation data in my back pocket if anyone would like to fund it.... - Cameron Neylon
Laughing at Cameron's last comment, because I know it's true. Wish I had the money, Cameron. - Jason Hoyt
It's alright Jason, you guys are on my list of people to target to help us get it funded... - Cameron Neylon
The German Research Foundation (DFG) this week announced that starting in July they only want to see the 5 best papers in a CV for a grant application. Very big step forward. - Martin Fenner from iPhone
@mfenner wondering what their definition is of "best" - suelibrarian
@ suelibrarian, they phrase it as something like "those most pertinent to the research proposed", which reads between the lines as basically those containing the "preliminary data". Anyway, the policy may well have been in place last July already, and I followed it when submitting a proposal in November. - Daniel Mietchen
I think we need as many numbers as there are concepts being measured. "Impact" is definitely multi-dimensional, so we need multiple dimensions. That said, probably not 15 dimensions (but probably yes 15 (or more!) datapoints to inform the several dimensions). By understanding more what the different dimensions are and why they matter, we can help tenure committees etc understand which ones they should care about. - Heather Piwowar
Jean-Claude has a great point about openness early and often decreasing the traditional "impact" metrics, because the impact is spread across preprints and articles for different audiences. Ideally we come up with metrics that reflect this. somehow. :) - Heather Piwowar
when I served on our university personnel committee for faculty a few years ago, we NEVER discussed metrics while deliberating on renewal/tenure cases (impact factor, number of citations, etc.), nor was it a factor for decision-making. Has this suddenly changed in academia and now metrics are really important? - Elizabeth Brown
Thanks, Dorthea, I do remember that discussion now. I see more interest/awareness of metrics in the sciences here, not necessarily as a tenure tool but more as a personal tool to gauge impact of your work. Some faculty count citations, but I don't see much interest in impact factors or alternative metrics. I haven't seen lots of evidence that US funding agencies are using IF or quantitative metrics for decision-making. The one NSF grant application I was involved with recently didn't ask for any metrics. - Elizabeth Brown
During my last participation in a tenure committee we did not explicitly discuss IF, although the candidate did use the IF as an argument to show the quality of his publication record. - Jean-Claude Bradley
The VIB institute in Belgium has periodic goals for their PIs, where they have to meet a certain number of publications in journals of specific IFs: the sort of X papers above IF 5, Y papers above IF 10. In my home country (Portugal) it used to be the total number of papers, which led people to collect stamps, but it is shifting now to citations and IFs. One good example in Europe appears to be... more... - Pedro Beltrao
Shirley Wu
Overheard regarding papers published in PLoS ONE - "it was rejected somewhere else", "The bar is 'not crackpot'", "people publish in CNS because that's where the attention is, I don't know anyone who reads PLoS ONE", "The reputation of the journal is a good way to filter out noise". Is there truth to these claims? Discuss.
Almost all papers, in all journals, have been rejected from somewhere else. Our bar is "is it science, is it conducted properly, is it reported properly, and do the conclusions follow the data etc" - the bar is not "is it sexy, or impactful, or a major advance". At the same time, we are not CNS - as we are not selective. CNS combined publish just 5,000 articles a year between them but... more... - Peter Binfield
Peter, I certainly don't disagree with what you're saying and think PLoS ONE is valuable and innovative. But I was wondering if these negative judgments are pervasive (FF/twitter is a bit of an echo chamber and the real world can be a shock sometimes) and if so how to change them. There are those who argue that CNS has high precision even if they miss some good papers and so it's more... more... - Shirley Wu
The problem with echo chambers is that the Internet echoes forever; and forever is a long time. We just need to push out as much positive info as possible to try to combat any negative comments which may have been made rashly, or in error, but which get re-referenced for eternity. Our article-level metrics program will presumably show people whether any given paper in PLoS ONE is 'high'... more... - Peter Binfield
There's this thing known as FUD. Happens when someone sees their status eroding. The whole PLoS articles are not as good is just that, FUD - Deepak Singh
Indeed, it may be fear, uncertainty, doubt. It may also be lack of information and hard data. We are going to fix the latter. Certainly, people are voting with their feet - we have 37,000 published authors in under 3 years, and people are publishing with us in ever increasing numbers (http://poeticeconomics.blogspot.com/2009... ) - Peter Binfield
"it was rejected somewhere else" - perhaps. This is hard to tease out, but I have a feeling that most of the manuscripts that come to PLoS ONE have never been submitted elsewhere - Bora Zivkovic
"The bar is 'not crackpot' - good bar, IMHO. Why is any other bar necessary? Think. Really. - Bora Zivkovic
"people publish in CNS because that's where the attention is, I don't know anyone who reads PLoS ONE" - who still reads journals? Srsly? Don't people search online for papers they are interested in? Do physicists read biology papers when their copy of Nature arrives? No, they read Nature for "news and views". - Bora Zivkovic
"The reputation of the journal is a good way to filter out noise" - perhaps a century ago when every scientist could read every scientific paper and understand it, and every scientist was a 'Victorian scholar' who felt the need to keep up with ALL of science. Today, you read papers in your narrow field - you find them online. News from other sciences you can find in pop-sci magazines, on blogs, etc. - Bora Zivkovic
@Bora, I think there are still a fair number of people who don't search for papers necessarily, but browse TOCs, and so only browse the journals they're familiar with. During the discussion, someone asked, baffled, "but there are already so many papers [without PLoS ONE publishing so many more], how would people find ones of interest to them??" - Shirley Wu
A related discussion - based on a correspondence in Nature by a proponent of the views Shirley cites - is at http://ff.im/4GWlM . - Daniel Mietchen
@Shirley - although ToCs are certainly an important discovery tool, any publisher will tell you that the vast majority of their usage comes in from Google (who then read an article, and leave again to run another search) - Peter Binfield
@Peter, that would make sense, but I'm wondering if that necessarily translates into Google being the majority of people's preferred method for finding papers. At least the impression I got from folks in my lab was "so many papers, so little time" and so they're skeptical of anything that adds to the glut of papers without clearly adding value. They might agree on the principle that... more... - Shirley Wu
They also think, "if [a peer reviewer] didn't make a value judgment on whether this paper is significant, why should I waste time reading it?" - Shirley Wu
"I'm wondering if that necessarily translates into Google being the majority of people's preferred method for finding papers" - Good point. I guess you would want to measure time spent on page by people who come via the 2 (or more) routes to see how targeted their interest was - Peter Binfield
@Shirley - Then they are admitting that they would prefer one (or perhaps 2 or 3) other people to decide what is important for them, and so decide on their behalf what they should be reading. Doesn't sound like a very informed way to filter imho... - Peter Binfield
@Peter, more that if a million people access papers through Google 25% of the time, publishers will see Google as a huge source of traffic, but that doesn't mean people think of Google as their preferred method to find _NEW_ papers. - Shirley Wu
@Peter, well, it's using expert opinion. We all use it to some extent in areas we're not familiar with. If people aren't that internet savvy or aren't that organized, they depend on other people or name-brand journals to bring things to their attention. Also, commenting on papers hasn't really taken off yet - just a matter of time, probably - but it just means that the post-peer review process hasn't really proven its value yet. - Shirley Wu
"but it just means that the post-peer review process hasn't really proven its value yet." - indeed, and we DONT view our efforts as post-pub peer review. We view it as a new way to do post-pub evaluation / filtering / discovery. - Peter Binfield
Oh, the other thing that someone mentioned was "comments are valuable" - meaning "why would I give away my intellectual capital?" People are willing to share their comments with their labs or close colleagues, but not to the public or to the general scientific community. Is this just another mindset we combat with positivity and action? How to combat the vicious cycle of "no comments, so no value", "no value, so i won't comment"? - Shirley Wu
"Is this just another mindset we combat with positivity and action?" - I would say we combat it by showing them the power of being open about these things. For example, social bookmarking only works when everyone shares their bookmarks - in this example there is a clear benefit to both contribute and use. If people realised that by leaving comments they would be advancing science;... more... - Peter Binfield
@Daniel, ah yes, I remember that thread now. Unfortunately I think many scientists are similar in mindset to the letter writer. They don't know about or understand new ways of receiving content, which might seem strange to those of us here, but there are many more people out there than are in here. - Shirley Wu
Also, you could use the same argument about peer review: "My time and thoughts are valuable, why should I do peer review?" Apparently academia feels that the quid pro quo works in that situation at least (and that is done anonymously!) - Peter Binfield
@Peter, true, though I think some of that is tied to the reputation of the journal again - being a reviewer for Nature > reviewer for PLoS ONE (in their eyes), editors know them, they can talk about it and gain status. They get tangible and subtle career boosts. Whereas commenting on papers online and publishing in PLoS ONE doesn't get someone tenure (yet). "It would be very brave and... more... - Shirley Wu
Peter - given the problems that journals have finding suitable reviewers, I would hesitate a bit calling that a working system. - Daniel Mietchen
@Daniel :) - Peter Binfield
Another link that may be useful reposting here: Pubfeed at http://pubfeed.cs.toronto.edu/ basically allows you to treat the whole web of scholarly articles like a TOC alert (just a bit more customizable) and pipe that into your preferred feed reader. - Daniel Mietchen
@Shirley - please don't forget that there are 25,000 journals in the world and millions of papers published per year. CNS is just 3 titles, and if you lump together all similar titles (highly exclusive, professional editors, well-known brands, conferring 'bragging rights' on anyone who works with them) then you are still talking about just a handful of titles, with a small percentage of the content. We need a system that works for everyone, not just a small sub-set - Peter Binfield
@Peter, oh, I'm well aware, just relaying bits of an impromptu debate I had earlier today with people who don't see the value of venues like PLoS ONE. These are all arguments they make, and while I don't agree with them, it is tough to convince people - Shirley Wu
You could try asking them exactly how many downloads their last paper in a 'high impact' journal got... - Peter Binfield
Fair enough, but you know, I really don't think they think about that. They think "what will be in my CV?" and they think any journal that is somewhat competitive [includes other PLoS journals, BMC journals, etc] looks better than one that accepts anything that's methodologically sound. Again, not my view, but perhaps one that is held by many. Do people list # of downloads on their CV for publications? - Shirley Wu
They don't, because they don't have the data. However, people do list if their paper was rated by F1000, or if BMC designated it a 'highly accessed' article. So I think they will start to say "this paper was downloaded 5000 times in the first 3 months, which put it in the top x% of all PLoS ONE articles, the top y% of all PLoS articles, and the top z% of ALL articles" (when the rest of the world starts quoting this data) - Peter Binfield
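A toy illustration of the "top x% by downloads" framing Peter describes; the download counts are invented.

```python
# Invented download counts; rank one article against the rest as a "top x%" figure.
downloads = {"article_1": 5000, "article_2": 1200, "article_3": 800, "article_4": 300}

def top_percent(article, downloads):
    ranked = sorted(downloads.values(), reverse=True)
    position = ranked.index(downloads[article]) + 1  # 1 = most downloaded
    return 100.0 * position / len(ranked)

print(f"article_1 is in the top {top_percent('article_1', downloads):.0f}% by downloads")
# -> article_1 is in the top 25% by downloads
```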
I think of it as trying to set up a social experiment. If I'm right and a more cooperative model can produce better science than the current hypercompetitive structure, then over the next decade or so, facility with new methods and metrics that center on Open practices will provide a competitive edge for some researchers, and unwillingness to change will put others at a disadvantage. We... more... - Bill Hooker
And isn't that the scientific way? - Deepak Singh from IM
"I think there are still a fair number of people who don't search for papers necessarily, but browse TOCs" Could it be that those are the people publishing in CNS and miss the most important papers for their work? http://bjoern.brembs.net/comment... It's only one example, as anecdotal as it gets, but it shows two things: 1. CNS 'quality' is merely correlational and highly noisy.... more... - Björn Brembs
My environment is rather positive about PLoS ONE. We all know about the difference between relevance and quality. While many PLoS ONE papers might not be of widespread interest, the review process is of comparable or better quality than that of smaller conferences and, e.g., high-volume BMC journals. Other journals have severe issues with reviewer quality these days, and it seems to be getting worse. - Roland Krause
I'm still wondering about the degree of scalability of post-publication (significance) peer review systems. Is it really realistic to think that once (all) journals go OA and implement such a system, the entire scientific community will benefit? Assuming that it's "fair" for all journals to get an equal amount of attention from "scholarly feedback communities", how can we encourage... more... - Wobbler
I agree with Bill Hooker's statement just above. Those who echo Shirley's original quote will be at a disadvantage, which means better odds for the Shirleys of the research world. - Jason Hoyt
I often say something along the lines of what Bill said. The environment is changing. To succeed in the new environment, one has to change not just one's publishing habits, but also rethink how to do research and how to write it. Thus, people who think about it early on will be able to gain advantage over people who are still stuck in the old ways of doing things. As the new environment... more... - Bora Zivkovic
Martin Fenner
Peter Binfield: Scientific Publishing in 5-10 years
1. Do current publishers exist? 2. Does the journal exist as a package? 3. Does the article exist? 4. What business models dominate? 5. What new technical features do we seriously expect? 6. What new modes of scholarly communication may gain wide acceptance? - Martin Fenner
Over 0-5 years probably much the same as it is now - but 5-10 years is a much more interesting timeframe - Cameron Neylon
Do we need articles when people just want to look at the data? - Kubke
3D in PDFs has become available only recently. - Martin Fenner
Who is actually doing the science - is it still professional scientists, or is it much more diverse? - Cameron Neylon
Other questions: What money exists in 5-10 years? Who is doing the actual work of publishing? What are the customers? - Martin Fenner
What will happen to society journals in 5-10 years? These journals bring in a substantial amount of revenue to the societies. - Martin Fenner
Average annual price increase of journals: 5-10%. - Martin Fenner
40-45% of revenue currently comes from U.S. academic institutions. - Martin Fenner
Anyone else having trouble getting audio from Cameron's livecast? (Will comment at livestream.com from now on). - Michael Nielsen
Who is paying for the pay-per-article? - Martin Fenner
"Even at 99 cents an article, there will still only be an X number of buyers" translation - lower revenues for publisher to go the route of iTunes - Jason Hoyt
Most people think that the scientific article will be around in 5-10 years. - Martin Fenner
NLM-DTD is mentioned as a technical standard: http://dtd.nlm.nih.gov/ - Martin Fenner
Article-level metrics will accelerate the death of the journal. - Martin Fenner
The role of the librarian will change. More about information, less about journal subscriptions. - Martin Fenner
Is an article the smallest unit of scientific output? - Martin Fenner
Example of a publishing tool that uses NLM-DTD: Lemon XML http://network.nature.com/people... - Martin Fenner
Current cost of article semantic markup: $10,000. - Martin Fenner
Problems with permanent record for articles containing multimedia. - Martin Fenner
Extended discussion about problems with archiving of articles in digital form. - Martin Fenner
We will have to discard data, as the carbon footprint of the storage solutions will get too high. - Martin Fenner
Publishers might want to look at Business Week for ideas. Appears to be working as a model to move from print to online. Effective at engaging the community in the content. Research is the original peer review, it shouldn't be that hard to move to a blog and social media model. Why not have all science become collaborative? Just need to define the right rules that fit the community. - Leonard Kish
Medpedia just released functionality to collaborate on documents. Wonder where that's going? - Leonard Kish
Societies are really communities first. I would think they would have an easier transition to online media and extending their communities online. - Leonard Kish
Daniel Mietchen
Changing the journal impact factor | Mendeley Blog (following up on http://ff.im/1FY3f ) - http://www.mendeley.com/blog...
"The significance isn’t primarily about PLoS, it’s about the ability to finally measure impact at the article-level in real-time." - Daniel Mietchen from Bookmarklet
" 1. We can now measure the impact of an individual article in terms of readership. For example, how many downloads, average time spent reading, how often the article is shared. 2. The measurement is in real-time. We no longer have to wait two years or more before seeing how often an article is being cited to determine its worth. 3. We are leveling the playing field for all journals.... more... - Daniel Mietchen
That's all about readership so far (and thus to some extent skewed towards popular topics), not research quality. But still interesting and necessary developments. - Daniel Mietchen
@Daniel: True, there could be a bias towards popular topics. Hopefully, a bit of normalization will help to limit that feedback loop of "popularity breeds popularity." Transparency in the process, rather than a magical black box, should also help to understand any trends. - Jason Hoyt
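A crude sketch of the sort of normalization Jason mentions: scale an article's readership by the average for its field so that popular fields don't automatically dominate. Field names, counts, and the normalization rule are all hypothetical.

```python
# Hypothetical readership counts and field averages.
articles = [
    {"id": "a1", "field": "cancer biology", "downloads": 4000},
    {"id": "a2", "field": "mycology", "downloads": 400},
]
field_average = {"cancer biology": 2000, "mycology": 150}

def normalized_readership(article):
    """Downloads relative to the average article in the same field."""
    return article["downloads"] / field_average[article["field"]]

for a in articles:
    print(a["id"], round(normalized_readership(a), 2))
# a1 -> 2.0, a2 -> 2.67: the mycology paper scores higher once field size is factored in
```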
For what PLoS ONE is up to in this area check out: http://conferences.aepic.it/index... - Peter Binfield
Thanks, Peter - that's a good summary, unfortunately buried in that conference that none of my reference search engines has indexed. Anyway, please take a look at a test implementation of single-item metrics at http://bit.ly/mDL58 which might be helpful when discussing rating standards across sites in relation to the need for transparency that Jason has alluded to. - Daniel Mietchen
Well, to be fair it only went live on that site last week so it may not have been indexed yet. Also, I will be blogging it sometime this week (with the ppt etc). I think transparency is going to be the number one thing needed to make any system work - users need to trust the data and trust their ability to compare metrics across publishers / journals / articles. - Peter Binfield
More on popularity: "Based on theoretical reasoning it has been suggested that the reliability of findings published in the scientific literature decreases with the popularity of a research field." Source: http://bit.ly/RU9RX . - Daniel Mietchen
Victor / Mendeley Team
We just launched an all-around redesign of the Mendeley website! http://twitpic.com/5mb87.
Hey, the "Organize, share, discover" tagline looks familiar ;) - Euan
Oh, hai. Im steelin som ur tagline? Kthxbai :-) I didn't realize until now - it's just a good tagline, I guess! - Victor / Mendeley Team
Now that it's been brought to your attention, what are you going to do about it? - Bill Hooker
Just to head that one off I think it's purely coincidental. See also science.tv & twine.com. And I still downloaded Mendeley ;) - Euan
Thank you, Euan - that's very gracious of you! Bill, our previous tagline was "Manage, share, discover", but we use the word "organize" within the software (e.g. for the File Organizer functionality), that's why we changed it. By the way, Last.fm's tagline used to be "Listen, share, discover" - which we also only found out after we had come up with ours! - Victor / Mendeley Team
Just to keep playing the same tune I've been on for the past several months - http://synthesis.williamgunn.org/2009... - it's all about the recommendation. That's the thing that makes you not just Y.A.S.N. that no one bothers joining because they don't want to do the work required. - Mr. Gunn
Saw the citeulike feature... gave it a try but it didn't seem to work... it pops up a citeulike window; am I supposed to do something there? - Steve Koch
The site looks great! Also, I feel like I'm missing something, maybe you can help Mr.G. How does Mendeley help me find recommended things? I can't find anything that suggests articles for me, or tells me who my "neighbors" are...? - Steve Koch
@Steve Koch. Re the citeulike feature: You should see a button on that page to accept and then get taken to your profile page on citeulike where you tick a checkbox (at the bottom). - Kevin Emamy
Thanks Kevin -- I tried it again just now and it worked. FYI Mendeley people: it did not work yesterday--today it brought me to a different screen than yesterday. Not sure if you fixed something or if the behavior is erratic. - Steve Koch
Probably a bug w/ citeulike feature: After setting it up, I now get an error in Mendeley Desktop telling me that I have a connection problem (which I don't). I think it's having trouble dealing with the new citeulike entries. Here's the error window: http://openwetware.org/wiki... - Steve Koch
Also, when I click on the "details" button it crashes Mendeley Desktop...FYI, I have a knack for breaking things ... just ask Bill Flanagan over at OpenWetWare :) - Steve Koch
@Steve Koch. Yep, the recommendation component isn't too obvious at the moment, is it. :) Regarding per-article recommendations: 1) More data, more data. Until we meet a certain threshold, recommendations may not be too helpful and people could lose faith that it works. 2) The aggregated statistics give some insight for recs, but are obviously too general at the discipline level for the moment. We'll get much more granular into specific fields over time. - Jason Hoyt
Lisa Green
How many FF Life Scientists live in the San Francisco Bay Area? Please comment below if you do and indicate whether you would be interested in meeting up in physical space.
I do and would enjoy meeting up. - Jason Stajich
There for a sabbatical from Jan. I'm an organic chemist, so a "life scientist". - Matthew Todd
Here, hear! For the foreseeable future. - Shirley Wu
I'm more of a lurker in your group. But I'm in the Bay Area frequently - Zaki Manian
Technically I live in the Bay Area, but I live in the Mission. bwahhaha. - Bosco Ho
Well, we might choose a location in the city if that makes you happy, Bosco; I know it is hard to get you types over the bridge... - Jason Stajich
again, that odd feeling that I'm living in the wrong place :-) http://friendfeed.com/the-lif... - Pierre Lindenbaum
So, how would we go about making a Google Map mashup for the Life Scientists group? - Egon Willighagen
You might try connecting up with the SF life scientists at Nature Network http://network.nature.com - they would probably like to join in. Quite a bit of mapping has been going on among users there over the past months. (Users log their location on their profiles.) - Maxine
Marcos has already tried to build a map http://twitter.com/marcosc... ; I suggested that he use freebase or (connotea + geotags) to build it. - Pierre Lindenbaum
Twitter has location as well, so you could query that info. - Mr. Gunn
ditto for the "FriendFeed Life Scientists" group on LinkedIn http://www.linkedin.com/groups... - Pierre Lindenbaum
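A minimal sketch of the map mashup idea: turn a hand-curated list of group members and locations into GeoJSON that Google Maps (or any map viewer) can load. The names and coordinates are placeholders; in practice the locations could be pulled from Twitter, LinkedIn, or Nature Network profiles as suggested above.

```python
import json

# Placeholder member locations.
members = [
    {"name": "Example Member", "lat": 37.7749, "lon": -122.4194},  # San Francisco
    {"name": "Another Member", "lat": 52.3676, "lon": 4.9041},     # Amsterdam
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [m["lon"], m["lat"]]},
        "properties": {"name": m["name"]},
    }
    for m in members
]

with open("life_scientists.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)
```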
Does it still count for those who have moved off of the wet bench and onto the "dry bench?" ;) Count me in for happy hour anytime. -Menlo Park - Jason Hoyt
Michael Nielsen
Does anyone know of any specific cases where the use of the semantic web has been instrumental in a scientific discovery?
I know many tools have been developed - I’ve been immersed in them for the past couple of days. But I don’t know of any cases (yet) where they’ve been crucial to a discovery, and am wondering if anyone knows of an example I'm missing? - Michael Nielsen
Don't really understand. Isn't that like asking what specific discoveries have come from using PubMed, Google Scholar, or any other search engine? Semantic web is just another search method. - Jason Hoyt
The semantic web is not "just another search method". Here's a relatively recent description, from one of the main drivers, Tim Berners-Lee: http://eprints.ecs.soton.ac.uk/12614... - Michael Nielsen
Michael, you should ping Eric Neumann on this. There might be private cases he could point to - Deepak Singh
Deepak - Will do. - Michael Nielsen
Semantic web is search, just a fully integrated representation of knowledge (per your citation). My point is that it still takes humans to really process that information and decide how to apply it to make research discoveries. Perhaps what you are referring to is more akin to AI, which IMO can make use of ontologies and semantic frameworks to make discoveries. - Jason Hoyt
Jason, that's incorrect. Semantic Web is NOT search. Ideally, it should be a way for machines to find other pieces of data and inform you about them so that you can take an action. E.g., has this experiment been done, and what other experiments does it relate to? In search you go looking for that. In the idealized linked data scenario, you are told that this happens to be the case, and there should be no AI involved. - Deepak Singh
Hi Deepak! Semantic web definitely helps derive hypotheses as you suggest. Looks like I'm outgunned at least 2 to 1, and I don't think it's going to get any better. :) - Jason Hoyt
Michael wields a far bigger stick, so I only add to the noise :) - Deepak Singh
I agree it's more than search, but I understand where Jason's coming from. The SW is about the representation and interconnectedness of the data. It's still a mechanism, even if it's a fantastically important and fundamental one. I would look to the SW to make me aware of data/connections, which might help me reformulate a question, rather than discover something per se. Interesting question, Michael. - Matthew Todd
Have you tried addressing freebase.com or Calais? You can ask them; they will reply to you for sure. But what I am sure has been involved in tons of discoveries is the NIH website - I was thinking mainly of the genetic databases. Ask around those people; I recommend you write to Animesh Ray, a great guy. - marianaº
Mariana - freebase exposes their data in several formats, including RDF, but so far as I know uses a custom built solution internally, i.e., they are not a semantic web company per se. Don't know about calais. Is the semantic web used for things like GenBank? I didn't have that impression, but could be completely wrong. - Michael Nielsen
Matthew - I've found much discussion in the semantic web literature of automated inference, hypothesis-forming and so on, but few real-world examples; that's where the question was coming from. - Michael Nielsen
Uniprot has an RDF backend. OpenCalais is a linked data repository if I recall correctly - Deepak Singh
Deepak - Yeah, following mariana's suggestion I've just been looking into OpenCalais, and it does look to be linked data. I'll have to look more closely at Uniprot. I'm looking for examples where you get something beyond what would be easily possible with a relational database, preferably significant discoveries that really took advantage of the technology. - Michael Nielsen
Another source for examples might be the pdf's here http://www.iscb.org/cms_add... - Deepak Singh
[Am resisting the temptation to suggest the Flying Spaghetti Monster link between global warming and pirates]. Michael, the data need to be more than linked, right? The data need to have been examined in a semantic way? i.e. searching for, and finding, something specific is not enough? The discovery needs to have arisen from an inherent vagueness in the question asked? - Matthew Todd
Matthew, but there is a difference between semantic interpretation and the semantic web/linked data. NLP, which falls in the former category, is not the linked data web that Sir Tim Berners-Lee talks about, which is one reason he started using the "linked data" term. People get the two mixed up all the time - Deepak Singh
NLP = ? [sorry, chemist] - Matthew Todd
NLP = Natural Language Processing, that is a form of automatic literature mining - Lars Juhl Jensen
So if I look at a web page with a chemical structure, and hover the mouse over that structure, and this brings up the wikipedia entry to that molecule, that's semantic? Or is the semantic part the ability for me to ask 'where else has this molecule been used?' Or is it deeper semantic if I can learn how many more times this molecule has been mentioned on the web today vs yesterday - a question nobody has asked before? More linking = deeper semantic? - Matthew Todd
Possibly, since there is often some kind of triple store or similar technology driving that. The "where else has this molecule been used" case should be something that is automatically flagged for you if the other place is represented appropriately. That last question about how many times can be solved in many ways that have nothing to do with the semantic web. I guess the best way to think about it is that today's links connect documents. It's a graph of documents. Now imagine if you had a graph of data - Deepak Singh
TBL from http://en.wikipedia.org/wiki... "I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge," So Michael you're looking for an example where the computer makes the discovery? Am being partly facetious. - Matthew Todd
Very interesting thread, thanks guys - Toby Graham
Matthew - What I want is something that would have been very tough to do with conventional relational databases. The kind of thing I have in mind is a simple but crucial automated inference that relied on disparate data sources being integrated. (Some very interesting old semi-automated examples, not using the semantic web: http://www.pubmedcentral.nih.gov/article... ) - Michael Nielsen
Matthew - In regard to your questions about what the semantic web is, I just mean the set of technologies developed from the foundation put forward by the W3C (RDF, OWL, etc). The crucial elements seem to be (a) representing data in a machine-readable but web-native format, unlike conventional databases, and (b) enabling disparate data sources to be linked or federated. Deepak's phrase ("graph of data") captures it nicely. The tools built on top of that core are what I'm referring to. - Michael Nielsen
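A tiny rdflib example of the "graph of data" idea: triples that might come from two different sources merged into one graph and queried together. The URIs and predicates are made up purely for illustration.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # made-up namespace

g = Graph()
# Triples that might originate in two different databases.
g.add((EX.compound42, EX.hasSolubility, Literal(3.2)))
g.add((EX.compound42, EX.mentionedIn, EX.paper7))
g.add((EX.paper7, EX.cites, EX.paper1))

# A question that spans both sources: which compounds appear in papers citing paper1?
query = """
PREFIX ex: <http://example.org/>
SELECT ?compound ?paper WHERE {
    ?compound ex:mentionedIn ?paper .
    ?paper ex:cites ex:paper1 .
}
"""
for compound, paper in g.query(query):
    print(compound, paper)
```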
Deepak - Working my way through those pdf's you mentioned. Interesting stuff, but no examples yet :-) - Michael Nielsen
Ah darn. I am out of ideas :) - Deepak Singh
Deepak - No worries. Thanks for your suggestions so far (and thanks to everyone else, as well!) - Michael Nielsen
Egon did a few nice demonstrators using the ONS solubility data (doing a second degree query using attributes from freebase that weren't in our original data model) but I can't say that those are massive discoveries as yet. Is it just that there isn't enough properly linked data? - Cameron Neylon
Cameron, I'd say the current available data in chemistry is too limited, and what is available now is too noisy... Michael, ask again in 6 months... - Egon Willighagen
Cameron and Egon - That's really interesting, especially to hear that things are moving reasonably quickly. I'll definitely be following with interest. - Michael Nielsen
Has anything come from the Neurocommons (http://sciencecommons.org/project...) project? Not sure if you've looked into that, but it might be worth pinging the guys who work on it. - Hilary
Hilary - I read their recent paper, and asked someone at Science Commons. There's certainly lots of interesting stuff, and it's promising, but as yet there seem to be no examples of the type I'm looking for. - Michael Nielsen
One of their flagship queries is a search for CA1 pyramidal neuron-related genes involved in signal transduction processes, at http://tinyurl.com/d4n9dt They have been using it as a real use case demonstrating use of the system; see the presentation at http://tinyurl.com/dlm4m9 - Melanie
Melanie - That example is interesting, but I don't think it was in any sense instrumental in making a discovery. It's more an example of how Neurocommons differs from existing search tools. (I didn't cross-check details, but it's very similar to an example used in their recent paper.) - Michael Nielsen
Michael - for me that is an example of a query that would be hard (or at least painful!) to do with individual databases, showing that using tools like Neurocommons you can access information more efficiently. In this case, by providing a list of relevant candidate genes it helps scientists target their investigations. Maybe I misunderstood what you were looking for, and hopefully others will be able to help :) - Melanie
Melanie - I'm looking for examples where someone used a SW tool in some crucial way to make a discovery. It's not difficult to point to examples where blogs, wikis, databases like GenBank etc were crucial in a significant discovery. But I don't know of an example where SW tools were crucial, in a way that took advantage of SW ideas. (Cases where they simply replace a relational database don't count.) - Michael Nielsen
Sat looking at the periodic table today, and thought that's a great example of a discovery arising from linked data - a pattern (periodicity) that emerged from experimental data (chemical reactivity) that needed to be looked at in the right way (Mendeleev playing patience). Obviously not the SW, however! - Matthew Todd
I think it's PMR who uses the periodic table as an example of Open Data -- since most of what Mendeleev pieced together was observations he hadn't made himself. - Bill Hooker
Egon and / or Cameron: What sorts of things do you think might be possible in six months time? - Michael Nielsen
Michael I don't know if this is exactly what you have in mind but in the next 6 months I think it is realistic for autonomous agents to be able to look for patterns in our solubility data and flag inconsistent measurements - it could be done by RDF - Jean-Claude Bradley
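A rough sketch of the pattern-checking agent Jean-Claude describes: scan solubility measurements and flag compound/solvent pairs whose reported values disagree by more than some threshold. The measurements and threshold are invented; a real agent would read the data from an RDF store rather than a Python list.

```python
from collections import defaultdict

# Invented measurements standing in for the ONS solubility data.
measurements = [
    {"compound": "4-nitrobenzaldehyde", "solvent": "ethanol", "solubility_M": 0.32},
    {"compound": "4-nitrobenzaldehyde", "solvent": "ethanol", "solubility_M": 0.95},
    {"compound": "vanillin", "solvent": "methanol", "solubility_M": 1.10},
]

def flag_inconsistent(measurements, tolerance=0.5):
    """Flag (compound, solvent) pairs whose spread exceeds `tolerance` of the mean."""
    groups = defaultdict(list)
    for m in measurements:
        groups[(m["compound"], m["solvent"])].append(m["solubility_M"])
    flagged = []
    for key, values in groups.items():
        mean = sum(values) / len(values)
        if mean and (max(values) - min(values)) / mean > tolerance:
            flagged.append((key, values))
    return flagged

for (compound, solvent), values in flag_inconsistent(measurements):
    print(f"Check {compound} in {solvent}: reported values {values}")
```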