FAQ

Aren’t academic papers too hard to read?

Too hard for whom?  Most (not all) academic papers are pretty specialised, which can make them hard for non-specialists to read.  But that doesn’t make them useless to the public.  To pick one obvious example: your doctor has the background to read medical research, but probably doesn’t have access.

And papers vary.  Bright high-school science students shouldn’t have too much trouble following the arguments of papers like “Head and neck posture in sauropod dinosaurs inferred from extant animals”, even if they don’t understand all the details and ignore the citations.

In the end, it’s for readers to decide whether or not a given paper is “too hard” for them; it’s not for publishers to decide ahead of time, and use that as an excuse for not allowing access.

But the people who need access already have it.

This is an argument sometimes made by senior academics at well-funded universities with comprehensive subscriptions.  It may be true that a tiny proportion of researchers have all the access they need.  But there are multiple problems with this claim.  The most fundamental is this:

We are a long way from the fully open access to research that we need.

What about military research?

That kind of research isn’t published at all, so no-one is suggesting that it should be made freely available.

If military secrets were published in non-open journals, a $35 access fee would hardly deter the people we don’t want reading them.  The only way to keep them safe is not to publish.

Why isn’t publicly funded research already free to the public?

It does seem crazy.  But this is a historical accident.

Before the Internet, the only way for papers to be read around the world was for publishers to print and distribute physical copies.  Since each copy cost the publisher money to make and deliver, they quite reasonably charged for each copy.

But now that we can make any number of copies and send them anywhere in the world instantaneously, the marginal cost of distributing each copy is effectively zero.  Publishers still have costs, but these are incurred up front, in handling the manuscripts and getting them into their final form.

Some new publishers, such as the Public Library of Science and BioMed Central, charge accordingly.  Their costs are met by article processing charges (APCs), which are paid by authors from the grant money their projects receive.  (Fee waivers are offered for authors without grants.)  But most older publishers, too used to the old model, are resisting the change.  They continue to charge for each access — even though accesses are zero-cost (or so close to zero that they can’t be measured).

Although it’s unwise for publishers to cling to an obsolete business model, it’s not unfair: after all, they are free to run their businesses as they see fit.  What is unfair is for these charges to hinder access to research that was publicly funded in the first place.  (In practice, this is the vast majority of published research.)  This is why some public funding bodies (such as the NIH in America) have public access policies that grant recipients have to follow.

Won’t publishers starve?

No.  At least, not if they adapt to a business model that makes sense in an Internet-enabled world.

But, really, it’s publishers’ job to make sure they are providing a valuable service to the public, not the public’s job to prop up publishers that insist on an obsolete business model.  Those that can take the step into the 21st century will do well.  But if others have to close down because they insist on erecting artificial barriers to access, we won’t shed too many tears.

Can I reuse material from this site?

Yes, absolutely!  Except where noted, the interviews on this site and all other text are by the @access working group.  All content is released under the Creative Commons Attribution 3.0 Unported License (CC BY 3.0).  This means that you are free to re-use it anywhere, in any way, so long as credit is given.  Exceptions are noted on the pages where they occur.

We ask that you let us know when you re-use anything from this site; but that’s only a request (because we’re interested to see what people are doing).  You don’t need our permission.

Any other questions?

Please feel free to leave a comment below, asking any other question you may have.  (Honest questions only, please; no polemics.  There are places for debate over academic publishing reform, but this is not one of them.)

6 responses

  1. While I am a strong supporter of open source and open access, one question is bothering me. Maybe you have a good answer for me. Isn’t a continuous income from selling access to papers an important motivation for publishers to keep a long-term archive?

    What if an Open Access publisher loses authors willing to submit papers to its journals? It might also get into financial trouble in some other way. With no further options to make money, but continuing costs, the publisher might be forced to shut down its archive or start charging a subscription fee. Then some other organization would need to step in, providing access to the journals at no charge. This would probably be a publicly funded library or archive.

    Is there a good discussion of this potential problem online?

    1. Hi, Raphael, thanks for this important question.

      Oddly, no, I don’t know of any good discussions of this — but I can give you my own take on it.

      First, a good open-access publisher will factor ongoing costs into their up-front fee, so that they have cash in hand to keep archives running. That is one reason that PLoS charges a rather higher publication fee than some people expect.

      But maybe more important is that anyone can archive open-access works. They can be deposited in PubMed Central, so that the U.S. government does the work (PLoS does this as well as maintaining its own archive), and huge sets of papers can be made available for bulk download and mirroring. (PLoS does this, too: the big downloads are on BioTorrents, so they don’t even cost PLoS any download bandwidth.)

      The broader point is that when research is freely available, all the artificial barriers that make formal archives necessary in the first place come down. So, to come back to my favourite example of PLoS: not only do they have their own archives, plus all their papers in PubMed Central (and also in LOCKSS), plus those big freely available batch downloads, but everyone who cares about a given subject has all the relevant PLoS papers on their hard drive. In effect, we have a massive geographically distributed archive that replicates copies across thousands of nodes worldwide. And no-one even had to build it! That’s what happens all by itself when barriers are removed.

      1. I’d like to stress the value of libraries in this area. In the print domain, we’ve not expected publishers to keep archives over the long term, as libraries have performed that function. And in the same way as the ‘lots of copies’ model Mike refers to, the fact that many libraries have copies of print journals increases the likelihood that the community won’t lose access.

        However, it has long been recognised that this isn’t sufficient. In order to make absolutely sure that materials are preserved and curated, we have libraries that are mandated to do so (the ‘legal deposit’ or ‘copyright’ libraries in the UK). We also have national networks of research libraries collaborating to ensure that material is disposed of only if there is assurance that it is being retained by other institutions, or collectively (such as through the UK Research Reserve).

        In the digital world this is even more important, and this is where I feel we need more than an assumption that multiple copies guarantee long-term security. In the print world, it’s possible to be lucky, and find that an item left on a shelf is still there 100 years later. In many cases, of course, it isn’t. Or, if it is, it’s in such poor condition as to be unreadable. This is why legal deposit libraries hold material in acid free boxes, release it under specific conditions, and may even store it in reduced oxygen environments.

        In the digital world, if a file is left on a server for 100 years, it’s hard to imagine that it will still be there when you (or, more likely, your descendants!) try to access it. Of course, we’re more careful than this; we back files up, we upgrade servers, we have disaster recovery systems, etc. But technology changes rapidly: file formats become obsolete, and so do software, operating systems and hardware. This is why there’s a large community of digital preservation advocates working to ensure that we care for our digital memory in the same way as we have looked after print. Organisations like the Digital Preservation Coalition and the Digital Curation Centre in the UK are at the centre of this work, as are our national libraries. New standards (such as the OAIS Reference Model and PREMIS), tools (DROID, JHOVE) and software (e.g. Ex Libris Rosetta) are emerging, and feeding into digital library architectures to manage, preserve and guarantee access to digital collections.

        So libraries can do much more in this space than provide OA via the Green route through their institutional repositories, and are very much up for the challenge of being the digital custodians required in an open access world!

      2. Thanks, Simon. I do agree that libraries tend to be overlooked in Open Access discussions — which is ironic, given that librarians have been agitating for OA since long before most researchers realised what an important issue it is.

  2. When I talk about the concept of Open Data/Access, meaning that data and information should be freely available to the public, without restriction or charge for their use, people generally respond: “What have you contributed? You don’t have the right to ask others who generated the data and information!” Yes, they are right, and I feel that I am not the right person to advocate for Open Data/Access.

    1. Sridhar, I am not really clear on what point you’re making here.
