How widespread is WestlawNext?

A student asked me this question. Since I live and work in the beautiful bubble known as Stanford University, and have no idea how things work in the Real World, I turned to outside help to answer the student’s question.

I first asked our Westlaw representative, who provided this interesting and useful piece of information:

Based on a recent article about Thomson Reuters revenue, “The WestlawNext legal database has been sold to more than 18,500 customers since its launch in February 2010, representing 34 percent of Westlaw’s revenue base.”


But I knew that our students would want to know more specific information, so I sent out a quick request on the Northern California Association of Law Libraries (NOCALL) listserv.  I received 21 replies — 6 from Biglaw firms, 8 from small/midsize firms, 2 from county law libraries, 4 from the courts (U.S. District, United States Court of Appeals, and California Appellate), and 1 from a state agency.  Of the 6 Biglaw firms, 4 have WestlawNext (although one, at present, is only making it available to firm librarians — see comments below) and 2 do not.

Of the 8 small/midsize firms, 5 have WestlawNext and 3 do not.

None of the public sector law libraries have WestlawNext.  The state agency reports that it might be added this summer.  I did find it a little ironic that the court libraries do not have WestlawNext — didn’t West get started by wooing the judiciary and treating judges extra special nice?

The comments I received were also very useful and I read many of them to my students, since they contain some great research tips and insights.

Here are a few of the comments:

I know that when firm librarians first saw the marketing materials, we were worried that the quality of search results would go down due to the one-box searching, but if anything the opposite has happened.  The result ranking is much better than it was previously, and you can see a lot more information before clicking into a document, which is great.

Our firm has a flat rate contract, so even though there is a cost for the original search ($50), the amount billed back to the client is significantly lower.  They shouldn’t be scared to use the resource due to the cost (at our firm anyway).  It’s in line with Lexis and the old version of Westlaw.  But of course, books are still cheaper.
Of course, they should still use good search practices so we’re not charging the client needlessly – searching broadly and then narrowing the focus, thinking before clicking into documents, checking before getting material from outside our pricing plan.  You can refer back to materials saved to a folder for a year, for free.  I’m saving a ton of material to folders.
The “price triggers” that incur costs are the initial search, opening a document, and clicking on the KeyCite materials.
Our firm’s flat-rate contract doesn’t cover the PDF images of reporters – that’s the only place where you’re not warned before getting material outside of our contract.
We did a firm survey last year, and honestly, most of our attorneys start their research process on Google because it’s free.  Once they have useful information (like a case name or a statute or a law review article), they’ll go online and find all the related documents and secondary sources.  WestlawNext does a really good job of that, and the new format for KeyCite makes it easy to trace between material types. 
One more caveat: KeyCite and Shepard’s may both say a case is good law when underlying statutes or cases have been invalidated (not always, but sometimes).  Nor do they always catch it when a case has been invalidated by new legislation.  Knowing how far to trust citator services is important.

Only librarians have been given permission to use WLN.  We will be offering mandatory classes on the product before attorneys are given passwords to access it.  We are aware that law school students have been exposed to WLN and will likely expect to use it upon entering the firm environment, so our window to get up to speed is fast approaching.  Caveats: Not everything has been loaded into WLN, so it could be frustrating for attorneys to be prompted, in the middle of their research, to transition back to Westlaw.  We’re also not sure whether the costs will increase, since clicking on any result keeps adding to the total.  I know we librarians have had conference-call discussions about some of the quirky searching and results . . . .  Do I like it?  I had a trial ID and have not used it much since our contract went into effect in January.  I’m still “on the fence” about it, but realize it is the wave of the future in this Googlish society.
The federal courts do not have WestlawNext at this time, and my understanding is that while the Administrative Office in D.C. has discussed it with Thomson-Reuters, there is no plan to purchase it for the federal judiciary in the near future.
We are using it.  The attorneys really like it.  One thing I’ve learned about it is that you should never choose the hourly setting on WestlawNext.  Always use it in transactional mode since the nature of it promotes lots of browsing time.  Most law firms are turning off the hourly feature and forcing transactional mode, but if not it can wreak havoc with your flat-rate contract client allocation.
My advice for students:  Know how much the search costs are before you do it.  And always call the research attorneys — they know their tool better than any of us ever will.
We aren’t using it in the [state] Judicial Branch.  It’s way too expensive and we can’t afford it!  And if Westlaw itself becomes too expensive for us we may be forced to use just one service.  Since Lexis has the official reporting contract, we must have access to them.
We do not have WestlawNext.  We did a trial of it and it has potential, but we are not willing to pay extra for it.
I see other problems besides cost for WestlawNext in law firms.  To oversimplify: WestlawNext’s research model is Google on new steroids.  That model shows remarkable detachment from real-life research problems in law firms.  The stock examples used in WestlawNext’s demos fit TR’s marketing well enough, but I could not translate them into the everyday online research done in law firms.  I also see evidence of algorithmic anomalies – possibly widespread – that have only begun to be explored.
We have been using WLN for the past year.  We hopped on the bandwagon pretty early due to a demo seen here by our litigation partners.  The litigation attorneys like it a lot.  Power users of regular Westlaw have a big learning curve, so they do not like it quite as much.  It is great, however, for researching an area you may be unfamiliar with, since it will give you the most relevant cases up front.  Our attorneys like this feature.  They also like the cost: they can figure out how much their research will cost before going in, since a search runs about $65, and then you can open as many documents as you want until you hit your research budget ($15/doc. or so).  It relieves some of the pressure they feel when going in.  I think it is here to stay.  Even [after] I have cancelled Lexis access here and cut my print budget and staffing, the WLN contract was added without blinking an eye. . . .
We require everyone to be trained first on regular Westlaw.  We will then train them on WestlawNext.  There are cost pitfalls with both.  Searching is cheaper and broader with WestlawNext, but if you want to look at lots of documents, you will run up the costs.  Initial searching in Westlaw is probably narrower (you have to select a database), but then the documents don’t cost extra to view.
I would recommend that students avoid WestlawNext like the plague until they have a solid grasp on doing research on their own.  You do not want to be dependent on an algorithm created by a corporation to do an essential part of your job.
I think Next can be a valuable tool and time-saver for attorneys who understand what the algorithm is doing and what resources it is returning in the results, but I worry that if students start learning how to research using Next, they will not be able to do any research when they leave school unless they are using, and paying a steep price for, Next.
The two main reasons [we don't have it] are that Westlaw would require us to have a separate contract for WestlawNext (we see this as paying for Westlaw twice), and that WestlawNext does not have all of Westlaw’s content. . . .
Though honestly we haven’t embraced it completely, and probably won’t until West tells us they are pulling the plug on classic.  I think it is a good product.  I like the $60.00 search and the left-hand screen that guides you to your hits.  The biggest issue is the pricing per document.  Those clicks just add up.  I am planning on asking our summer associate class whether they are using Classic or NEXT; then, based on the response, the rep will concentrate on one or the other for the orientation.  It will be interesting to see where the product stands with this first summer class, who have potentially been using it at school.
We at the California Appellate Courts are not.  We have Westlaw and Lexis . . . [and] should be rolling out LMO [Lexis for Microsoft Office] soon, but that is as fancy as we are getting.

18 thoughts on “How widespread is WestlawNext?”

  1. Paul,

    Great information and feedback. I’m curious though. One of the comments mentioned “algorithmic anomalies.” Any idea what that person is talking about?

  2. We’ve seen odd results too. One of our homework questions is this: You represent a magazine publisher. In order for your client to receive special mailing rates, she must provide information to the government, including her circulation numbers. She doesn’t like this, and wants to know if the law is constitutional. What is the answer? Draft a short e-mail to your client advising her.

    We then show them how to get to “second class mail” and periodical, which are the better search terms.

    Then, in WestlawNext when we put the following search into the search box: “second class mail” periodical constitutionality

    We pull up the most relevant case second in the list. . . .

    However, Roe v. Wade shows up at the top of the result list.

    We asked our rep. and he’s looking into it.

    • We just got this answer from our rep.:

      So, I got back an answer to your question and, it turns out, they went ahead and fixed the problem. They didn’t explain specifically what the issue was, but they seem to have figured it out and are planning to use it as guidance for further refinements. Here is exactly what they said:

      “Despite the vast majority of results for this query being quite good, we did discover that one aspect of the algorithm was not operating exactly as intended. We have made some initial adjustments and the results are now improved and more in line with our customers’ expectations. Thank you for passing along this feedback. It is very helpful to us, and we will use this example in further testing and adjustments as we continue to refine and improve WestSearch.”

      I just re-ran the search and Roe no longer appears anywhere in the list of cases.

  3. Interesting. When you run the search in Classic Westlaw, where does the most relevant case appear, by way of comparison?

  4. A natural language search in Classic WL for the same terms returns that case at #10. But I think I figured out why it’s lower in the results: the word “Constitutional” in all its forms does not actually appear in the case.

    • Jane,

      That’s interesting. So while there may be anomalies, the seminal case is actually returned higher on the list.

      Paul, I don’t suppose you could follow up with list members who actually use Next about whether they’ve had similar experiences, or maybe even their general experiences with search results?

      Thanks for the post.

  5. To answer the question about “algorithmic anomalies”: they are problems with the results caused by the nature of the “search engine,” which is based on an algorithm (mathematical formula). I can give you a specific example. Using Westlaw Next (WLN), I did research on invited error in federal CIVIL cases. I got lots of results related to invited error in federal CRIMINAL cases. As explained by West, the criminal results showed up in greater number because invited error in criminal cases appears in several key numbers, while invited error in civil cases apparently appears in only one key number. The Westlaw Next algorithm emphasizes the content of key-number annotations more than “just” the words (the natural language approach) in the opinion and its annotations.

    When I conducted a natural language search with Westlaw Classic (WLC), I found my relevant cases very close to the top of the results, whereas I found virtually nothing with WLN.

    The healthiest way to approach WLN is as another tool. It’s very good when searching for a legal principle, but weaker when you’re trying to find something that’s more factually driven. I have been running comparison searches using the same language in both systems (WLN v. WLC natural language). Sometimes, WLN wins; sometimes, WLC wins.

    Furthermore, despite representations by West, a terms-and-connectors search conducted in WLN is NOT the same as a terms-and-connectors search conducted in WLC.

    In many cases, legal research is an exercise in PROVING A NEGATIVE–never something that can be accomplished with absolute certainty, of course. For example, has the 11th Circuit ever ruled on invited error in civil cases? If the answer is “no” (determined to the best of your ability and never guaranteed to be “no” despite your best efforts), then it’s okay to cite cases from other circuits or U.S. district courts in your brief. But you don’t want to cite cases from other circuits unless you’re “certain” the 11th Circuit has NOT spoken on the issue. This type of research is more difficult to accomplish with WLN. From my experience, this type of proving a negative is more accurately accomplished by doing a terms-and-connectors search in WLC.

    When you don’t know where to look for an answer to something, the WLN interface is good. When you do know where to look, the WLN interface tends to slow you down. The results are cluttered with too much irrelevant junk. In WLC, you can narrow the search BEFORE you conduct it; with WLN, you often have to “filter” the search AFTER it’s conducted. For me, this often adds unneeded steps. A failure to filter the results is also likely to leave irrelevant results at the top of the results pile, instead of at the bottom.

    My bottom line assessment: WLN is another tool for legal research, but not the only tool that should be used.

    • John,

      Thanks for the comprehensive reply. This is interesting because I’ve received feedback from several librarians saying their attorneys use both systems (WLC and WLN). Different research needs, different results.

  6. Power users hate WestlawNext. Occasional users love it. I am having an issue with outside-the-contract usage. Documents included in our contract are, for some strange reason, outside the contract in WestlawNext. I’ve asked customer service numerous times to check on this and have received no answers.

  7. WLN is out of the question for small public law libraries. In CA, one law librarian said that 44 out of our 58 counties could be considered small libraries. Due to budgetary concerns over the ever-rising cost of WLC, we switched to a package from Lexis that cost half as much and gave us the rest of the U.S.

    At the moment, things are happening with other vendors of online subscriptions, such as consortium pricing that aggregates public law libraries’ access to other services at a price within reason in proportion to our budget and patron numbers. This is especially relevant due to California budget cuts to public libraries (we work together sometimes to educate the public) and the planned federal cuts that will affect the federal dollars that states get. Before I go too far off point: we are having to be conservative, and we cannot afford the whole tool set, just the basic tools that a determined reader can use.

  8. These examples are great. Thanks Paul for getting this conversation rolling.

    The examples of how the algorithm sometimes improves and sometimes impedes accuracy are particularly useful. Also, THANK YOU John for validating my sense that “a terms-and-connectors search conducted in WLN is NOT the same as a terms-and-connectors search conducted in WLC.” I make the same assertion in my recent article about WestlawNext. I, too, have spoken with people at West to clarify this point. Entering expanders, proximity connectors, or document fields is supposed to trigger Boolean searching. I get mixed results. I am also told that you can use the advanced search template OR the “Advanced:” command in the global search box to trigger true Boolean searching. I continue to play with Boolean searching in WN and my results vary. BTW, I address my conversations with West about this and other issues in the revised version of my article (not the one currently posted on SSRN), which will be in the Summer edition of LLJ.

    I also appreciate hearing the ways that WN is working well for folks. I think it’s a powerful tool and it’s here to stay, so we’ve got to learn how to use it effectively.

  9. Hello, I am an employee of West and one of the folks responsible for building WestlawNext, search features in particular. I think some of the confusion about Boolean search (“Terms & Connectors” in Westlaw-speak) and our new WestSearch engine is related to the fact that WestlawNext returns Boolean search results in relevance order, which is quite different from Also, WestlawNext search is based around jurisdiction and searches across multiple types of content, so the content being searched is not necessarily the same as some of the databases people are used to in

    First, about the result sort order. Using expanders like !, proximity connectors like /P, or fields like Title( ) does trigger a Boolean search when entered into the global search box in WestlawNext. Additionally, customers can use the Advanced Search template to obtain Boolean search results for any query, as all queries entered on the Advanced template are run as Terms & Connectors. The default sort order for Boolean search results in WestlawNext is by relevance, in keeping with our desire and mission to present the most relevant documents to our customers at the top of their result lists., on the other hand, will display the result list for that same query in a different sort order, such as by date for Cases or by Table of Contents order for Statutes (Title 1, section 1, then Title 1, section 2, etc.). While we believe that customers find their documents more quickly in relevance order, WestlawNext does allow customers to re-sort their results if they want, for example by date, using the “Sort by” menu above the search results. Sort order defaults can also be changed via Preferences.

    Second, there may be differences in the content selected in WestlawNext, compared to a database chosen in For instance, a search in the CTA11 database on will contain only cases from the 11th Circuit Court of Appeals. Selecting the “11th Circuit” from the jurisdiction selector in WestlawNext includes not only the 11th Circuit Court of Appeals, but also the U.S. Supreme Court and the federal district courts and bankruptcy courts in the 11th Circuit. The “11th Circuit” jurisdiction is more appropriately compared to the FED11-ALL database in While you certainly can search just the 11th Circuit Court of Appeals in WestlawNext (for example, you can quickly navigate to that specific page by typing 11th or CTA11 into the search box), the differences in Boolean search results are about the content set being selected.

    I would also like to thank everyone who posted here for their comments and insights, as we take feedback on WestlawNext very seriously as we continue to improve the product.

    • Brian,

      Thanks for the response. I still find it surprising that Westlaw Classic never gave more weight to documents that contain the search terms several times, as opposed to ones that contain them only once, or at least gave the end user the opportunity to sort that way. KeyCite, by comparison, gives greater weight to a case that cites the KeyCited case more than once. But perhaps this is a difference in research needs. I’m sure you’ve done plenty of testing on weighted results with T&C to see if the search results were more “relevant.” I’d be curious to hear what the outcomes were like.

      • I believe that attempts to rank documents just by the frequency of query terms appearing in the document go back to the early history of Natural Language search engines. First you try to rank by number of hits, and then you find that the common terms in your query impact the results in unpleasant ways. Next you try to somehow balance out common and uncommon query terms, and to answer the question of what is common, you determine the frequency of terms in the larger corpus. Then someone says, hey, let’s also consider how close the terms are together in the document, because that might capture the right concept or discussion. Natural Language does all of that, and yet still has difficulty returning the most relevant documents for an issue at the top of the list. Finding and ranking the most relevant documents is a huge challenge, which is why it took us 5 years of work on WestSearch to get where we are today.

    • Brian,

      I understand that the history of Natural Language search would cause those sorts of problems, but I was actually just focusing on the narrow issue of frequency in a Boolean query, which is why I mentioned KeyCite as a backdrop. There is an assumption that a structured phrase repeated throughout an opinion is one that is being used and discussed, and, one might further assume, is more relevant to the searcher. This could also be false, which is why I asked about TRL’s experiences with frequency of Boolean queries, not unstructured arguments in Natural Language.

      Thanks for the reply.

      • I believe they perform worse than Natural Language, for the reasons stated. For single word or short phrase searches, maybe there isn’t much difference. It will depend upon what the researcher wants to find, and whether the more relevant documents contain those terms more frequently. If what you want is the most recent document mentioning a word, then term frequency doesn’t help. And back to the issue Mr. Hightower raised in an earlier comment, if you are trying to prove there are no documents containing the term, ranking may not matter.
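The ranking evolution sketched in the thread above — rank by raw hit count, then weight rare query terms more heavily using their frequency in the larger corpus — is the classic textbook TF-IDF idea. Here is a minimal, illustrative sketch of that idea in Python. To be clear, this is not WestSearch or any West algorithm; the documents, query, and smoothing formula are all invented for illustration.

```python
# Minimal TF-IDF ranking sketch: score = sum over query terms of
# (term frequency in document) * (inverse document frequency in corpus).
# Rare terms like "civil" outweigh ubiquitous terms like "error".
import math
from collections import Counter

docs = [
    "invited error doctrine in civil appeals and civil procedure",
    "invited error in criminal cases criminal appeals",
    "the doctrine of harmless error in criminal trials",
]
query = "invited error civil"

def tokenize(text):
    return text.lower().split()

corpus = [tokenize(d) for d in docs]
n_docs = len(corpus)

def idf(term):
    # Smoothed inverse document frequency: rarer terms get higher weight.
    df = sum(1 for doc in corpus if term in doc)
    return math.log((n_docs + 1) / (df + 1)) + 1.0

def score(doc_tokens, query_tokens):
    counts = Counter(doc_tokens)
    # Hit count per query term, weighted by that term's corpus rarity.
    return sum(counts[t] * idf(t) for t in query_tokens)

# Indices of documents, best match first.
ranked = sorted(range(n_docs),
                key=lambda i: score(corpus[i], tokenize(query)),
                reverse=True)
```

With these made-up documents, the civil-appeals document wins because “civil” appears nowhere else in the corpus, even though “error” appears everywhere — exactly the common-versus-uncommon balancing the comment describes. It also shows why the commenters saw anomalies: the ranking knows nothing about legal meaning, only term statistics.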
