Alireza wins a 13th Titled Arena.

Another dominant victory.

GM Alireza Firouzja won his 13th Titled Arena on Saturday, bringing him within one of the record 14 won by GM Magnus Carlsen. Firouzja was quickly among the leaders and eventually pulled away, finishing first with a comfortable 20-point margin. Carlsen played in the event too but started late, something that, as we’ve discussed before, is not necessarily a big disadvantage: his first game came 42 minutes into the two-hour event. By the time he surged into the top spots, the leader was simply too far ahead, and he finished third behind Firouzja and GM Daniel Naroditsky.

The late-start strategy can work, but only if somebody can trip up the leader enough to keep them from rocketing into the stratosphere. Carlsen did reasonably well at slowing down Firouzja, scoring 2.5/4 in their games, but the rest of the top players struggled mightily at that task. Naroditsky went 0/5 against Firouzja, fourth-place GM Vladislav Artemiev went 1/3, and fifth-place GM Andrew Tang went 0/2.

There was plenty of star power further down the table with names like GM Shakhriyar Mamedyarov, GM Nihal Sarin, GM Praggnanandhaa, and GM Dmitry Andreikin finishing in the top 20. The event was also well attended by streamers with NM ChessNetwork and IM Eric Rosen among others taking part.

The next Titled Arena will be on March 6th.

Alireza has been dominating this game and he only needs to find one crushing move to finish it off. Black to play and win.

Stockfish 13 NNUE on Lichess

The latest release of the strong open source engine is now available in your browser!

We congratulate the Stockfish developers for their latest release, Stockfish 13. Check out the release announcement for details.

Client-side analysis


Securely running programs like Stockfish in your browser requires them to be ported to JavaScript and/or WebAssembly (WASM). Previously this would disproportionately slow down NNUE. The slowdown has been fixed thanks to a recent contribution, so we can now provide Stockfish 13 NNUE for modern browsers.

Note that NNUE is most likely crunching fewer nodes per second than Stockfish with classical evaluation, but it is stronger nonetheless.

The implementation uses WASM SIMD, which allows efficiently applying the same CPU instruction to multiple pieces of data, speeding up the evaluation of the neural network.

When we last reported on the status of NNUE in WASM, the prototype let Emscripten choose the instructions based on the x86-targeted code, failing to achieve good results. Now the proper WASM SIMD instructions are picked by hand using compiler intrinsics.


Computing a dot product with WASM SIMD intrinsics
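The screenshot above showed the C++ intrinsics. As a rough analogue of the same idea, the Python sketch below (illustrative only; the real port uses C++ with WASM SIMD intrinsics such as the i32x4 family) contrasts a plain scalar dot product with a lane-wise one:

```python
def dot_scalar(a, b):
    # Plain loop: one multiply-add per iteration.
    total = 0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_simd_style(a, b, lanes=4):
    # SIMD-style: each pass of the outer loop stands for ONE vector
    # instruction operating on `lanes` pairs at once (e.g. a single
    # 4-lane multiply-add in WASM SIMD), keeping a vector of partial
    # sums that is "horizontally" added only at the very end.
    assert len(a) % lanes == 0
    partial = [0] * lanes
    for i in range(0, len(a), lanes):
        for l in range(lanes):          # conceptually simultaneous
            partial[l] += a[i + l] * b[i + l]
    return sum(partial)

weights = [1, -2, 3, 4, 0, 5, -1, 2]
inputs = [7, 1, 0, 2, 3, 1, 4, 8]
assert dot_simd_style(weights, inputs) == dot_scalar(weights, inputs) == 30
```

Since NNUE evaluation is dominated by exactly this kind of integer dot product, doing four or more lanes per instruction translates directly into more nodes per second.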

Using NNUE requires a download of about 10 MB (20 MB uncompressed) for the neural network file. We will refrain from using NNUE if your browser communicates that your device is in data-saver mode.

Chromium, Chrome (except on iOS), and Edge

Requires version 88. WASM SIMD should work out of the box, because Lichess is participating in an Origin Trial for WebAssembly SIMD (the feature behind chrome://flags/#enable-webassembly-simd).


Firefox

Firefox users will need more patience. If you’re feeling adventurous, you can enable javascript.options.wasm_simd in about:config. We have tested this with Firefox 85, but note that Firefox does not guarantee stability or security when tinkering with these flags.

Other browsers

Other browsers do not yet support WASM SIMD and will fall back to other Stockfish builds. We do not plan to update the fallbacks, so the gap in strength will grow as Stockfish progresses. This also applies to classical evaluation in newer Stockfish versions, since NNUE-specific search optimizations can be detrimental to classical evaluation strength. Here is the definitive list of available builds on Lichess, best to worst:
| Tag | Source | Stockfish version | Tech |
| --- | --- | --- | --- |
| NNUE | hi-ogawa/Stockfish | 13 | Multi-threaded, uses WASM SIMD. Strongest. |
| HCE | niklasf/stockfish.wasm | 12 | Multi-threaded WASM, but using the classical hand-crafted evaluation function. A multi-variant build is also used for chess variants. |
| WASM | niklasf/stockfish.js | 10 | Slower single-threaded WASM fallback. |
| ASMJS | niklasf/stockfish.js | 10 | Extremely slow pure JavaScript fallback. |

Server-side analysis


Unlike client-side analysis, when you click “request a computer analysis”, the work is handled by a user sharing their computing power through the fishnet client. A new version (fishnet 2.2.5) with Stockfish 13 (using NNUE, just like the previous version) was published on the day of release.


Number of fishnet contributors for each version. The rising yellow line shows Stockfish 13 overtaking all other versions as contributors update their clients.

 last edited: Thu, 18 Feb 2021 01:00:00 +0100  
Fat Fritz 2 is a rip-off

An important message from open source chess organizations.

A few days ago, ChessBase released Fat Fritz 2. It is not the first time ChessBase and Albert Silver have marketed someone else’s open-source engine as their own.

DeusX

Silver’s first network, DeusX, was promoted in a way suggesting that he did in a few months what had taken other engine authors decades. Silver described himself as the engine author even though the engine itself was Leela without significant modification.

Fat Fritz

The following year, Silver released an updated version of the DeusX network under the name Fat Fritz, sold as a part of ChessBase’s Fritz package for €79.90. Once again, it used the Leela engine without functional changes (the changes made included modifying the name and author strings, and some default parameter values).

Fat Fritz was marketed as if it were an innovative engine, instead of being just a renamed Leela. As an example, the product description begins, “It’s a semi-secret development, an AlphaZero clone, engineered over the past nine months,” and doesn’t mention Leela. Probably the closest to what can be called an “attribution” is a brief mention in the middle of one of the articles, saying that Fat Fritz uses Leela “as a foundation.” In reality, Fat Fritz is Leela, but with a different net. Even this article begins by describing an “inspiring” talk given by a DeepMind employee to ChessBase programmers, supporting the false impression that ChessBase played a significant role in the development of the Fat Fritz code.

In ChessBase articles, the Fat Fritz “engine” was described in a way that implied it was stronger than Stockfish and Leela, but the evidence was questionable. Silver’s Stockfish comparison, for example, used an outdated version of Stockfish even though the development version was known to be considerably stronger. Similarly, when compared to Leela, the strongest configuration of Leela was not used.

If your idea of innovation in chess is charging 100 EUR for changing the parameters of an open source engine, you're going to have some problems competing with and
— Gian-Carlo Pascutto (@gcpascutto) February 9, 2021

Gian-Carlo Pascutto is the author of several strong chess and Go engines and a contributor to the Stockfish and Leela Chess Zero projects.

Fat Fritz 2

In 2020, Stockfish, Leela’s main competitor, started to support NNUE, fast neural networks that can run on a CPU. This feature improved Stockfish significantly, restoring its status as the strongest existing chess engine.

The Stockfish team had the same painful experience as the Leela team when Silver decided to jump on the hype train again, and released Fat Fritz 2, sold by ChessBase for €99.90. It is now Stockfish that has been copied instead of Leela, but the overall style is unchanged:
  • As with Leela and FF1, only minimal changes have been made to the Stockfish engine (again, the name of the software and the authors, and some default parameters). Even though the Stockfish engine is critical for playing strength, it is mentioned only briefly and the impact of the Fat Fritz 2 neural network over the one used by Stockfish is greatly overstated. The product description says FF2 is “learning from the surgical precision of Stockfish’s legendary search”, but it isn’t learning from Stockfish, it is Stockfish.
  • As before, Fat Fritz 2 is advertised as the strongest engine, but the only results presented are against an older version of Stockfish, and not the version used by FF2. Independent results show that current Stockfish versions on which FF2 is based are in fact stronger than FF2, suggesting that Silver’s net does not add playing strength.
  • ChessBase has published an interview with Silver describing the work. In the text accompanying the interview, they describe Silver as the “inventor” of Fat Fritz 2, and say that he started the project “almost completely from scratch.” In reality only minimal changes were made and Silver likely did not author them.
  • Silver describes FF2 as a “completely new” neural network, but it uses the Stockfish topology and differs from Stockfish’s network only in layer sizes. The interview article also says that Silver “came across a new neural network technology from Japan,” presumably because NNUE was originally implemented in Shogi engines. While it may sound as if Silver was responsible for bringing this innovation to chess, he did not implement NNUE and used mostly Stockfish tools to train the network.


It is sad to see claims of innovation where there has been none, and claims of improvement in an engine that is weaker than its open-source origins. It is also sad to see people appropriating the open-source work and effort of others and claiming it as their own.

Everyone is permitted and encouraged to modify and improve code from Stockfish/Leela while giving credit; that is the intent of open-source software. Everyone is allowed to copy Stockfish/Leela and sell them, provided the terms of the Stockfish/Leela license are met. But don’t pretend that the product being sold is something it isn’t.

Shamsiddin Wins Variant Titled Arena.

Not Chess, not exactly, but close.

Here at Lichess, we like to occasionally throw a bone to one of the strange subcultures that play versions of chess distinct from what we’re all used to. Even if you’re not a fan of their odd form of the game, even if it may seem totally counter to the way chess is supposed to be played, it's nice to try something new once in a while. To that end, it was decided that this Titled Arena would be played as a variant, with an increment time control.

After the dust settled, 18-year-old GM Shamsiddin Vokhidov was the champion by a thin margin. Closely behind him were GM Raunak Sadhwani and the anonymous GM FeegLood. The anonymous GM Caching and GM David Paravyan rounded out the top 5.

Interestingly, the presence of increment did little to dampen the players’ affinity for berserking. In normal chess without increment, berserking halves your clock, leaving your opponent with twice your time. Since berserking also removes the increment entirely for the berserking player, the time disadvantage was significantly larger in these games. The exact percentage of berserked games varied greatly among the top players: Sadhwani berserked only 6% of his games, while for Paravyan it was an incredible 83%.
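To put numbers on this, here is a quick sketch in plain Python, using 3+2 as an illustrative increment control (not necessarily the one used in this event), comparing total thinking time over a 40-move game:

```python
def total_time(base_minutes, increment_seconds, moves, berserk=False):
    """Total seconds available over `moves` moves.

    Berserking halves the base time and removes the increment entirely.
    """
    base = base_minutes * 60
    if berserk:
        return base / 2
    return base + increment_seconds * moves

# Without increment (1+0): the berserker simply has half the opponent's time.
assert total_time(1, 0, 40) / total_time(1, 0, 40, berserk=True) == 2.0

# With increment (3+2, illustrative): 260s vs 90s, nearly a 3x gap.
ratio = total_time(3, 2, 40) / total_time(3, 2, 40, berserk=True)
print(round(ratio, 2))
```

With increment in play, the berserker is giving up far more than half the clock, which makes the high berserk rates all the more remarkable.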

Congratulations to the winners, and a hearty thank you to all participants. The next Titled Arena will be a more traditional time control, 1+0.

Puzzlers on the Storm

More puzzling, more faster

It gives us great pride to announce the arrival of Puzzle Storm: a timed puzzle feature that prompts the user to solve puzzles of increasing difficulty as quickly as possible. Give it a try here.

The setup is fairly simple: you are given puzzles to solve and 3 minutes on the clock. Every correctly solved puzzle is worth a point. However, the remaining time does not tick away in boring, predictable chunks. It is possible to both add and subtract time from the clock via the combo bar, located to the right of the chessboard. Every correct move (move, not puzzle) adds one point to the combo bar. When the bar is full, you get a time bonus, and the bonuses increase with your total combo. Any incorrect answer costs you 10 seconds and resets the combo bar.
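The clock rules above can be sketched as follows. Note that the combo thresholds and bonus sizes in this sketch are invented for illustration; they are not Lichess's actual values.

```python
class StormClock:
    """Toy model of the Puzzle Storm clock and combo bar."""

    MALUS = 10                       # seconds lost on any wrong move
    BONUSES = {5: 3, 12: 5, 20: 7}   # combo size -> bonus seconds (hypothetical)

    def __init__(self, seconds=180):
        self.time = seconds
        self.combo = 0
        self.score = 0

    def correct_move(self, solves_puzzle=False):
        self.combo += 1              # every correct move feeds the combo bar
        if self.combo in self.BONUSES:
            self.time += self.BONUSES[self.combo]
        if solves_puzzle:
            self.score += 1          # one point per completed puzzle

    def wrong_move(self):
        self.time -= self.MALUS      # lose 10 seconds...
        self.combo = 0               # ...and the combo resets


clock = StormClock()
for _ in range(5):
    clock.correct_move()
assert clock.time == 183             # first (hypothetical) bonus reached
clock.wrong_move()
assert clock.time == 173 and clock.combo == 0
```

The key design point is that time is gained per correct move, not per puzzle, so long accurate sequences snowball while a single slip costs both seconds and momentum.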


After you run out of time you will see a graphic showing your score and some other statistics. You can click “play again” or scroll down to see a listing of the puzzles attempted. Clicking on a puzzle will take you to its standard puzzle page.

As with any new feature, there is some possibility of bugs and we’d like to hear about them, as well as any more general feedback. Please leave a comment or let us know on any of our social media sites.

Frequently asked questions

Is it like Puzzle Rush on

Yes. It's also like Haste on, and the speed trainer on, and Tactics Frenzy on PlayMagnus.

For many years, chess sites have had timed tactics, and now Lichess does too.

Is it free to play?

Yes. And we mean it. It's free for everyone forever, and it's unlimited. And there are no ads - as usual.

Is there a leaderboard?

No. At least not at the moment. Where there is a leaderboard, there is cheating. We don't think moderating this is a good use of our resources currently.

However you can see your best scores for each day, and the best score of each player is displayed on their profile page.

Will people memorize the puzzles?

We have more than a million puzzles, so it should be very rare to see a puzzle more than once. The Puzzle Storm set is renewed every 6 seconds.

Will there be more puzzle modes?

Maybe. We'll first see how this one goes, and listen to the feedback.

Some Puzzling Analysis

A quick look at how the new puzzle system is going!

The new puzzle system has been live for almost a month now and we wanted to take stock with a quick analysis, as well as introduce a new feature!

Puzzle Dashboard

On the new Puzzle Dashboard you can see your strengths and weaknesses, as well as a review of your recently played puzzles. Performance and solve percentages are available, as well as an option to replay failed puzzles. You can also dig into your specific strengths and weaknesses, and retry failed puzzles by theme!


Puzzle Dashboard

How it's going:

In total, we've now generated over a million puzzles, and we're not done yet - with more of our database left to mine for additional puzzles! If you want to use the database for any personal or commercial project it's all freely available here.


Histogram plot of the puzzle database by rating

There's a large spike at 1500 rating because we still have new puzzles trickling in that have not been played yet, and likewise the spikes to the left of centre are (probably) from puzzles that have not yet fully settled into their ratings after being played only a few times. We think there's no equivalent pattern on the right because more solvers are in the >1500 range, so those puzzles get played sooner, but let us know in the forum if you have an alternative explanation!

Here are some of the most challenging themes by average puzzle rating:


And here are the lowest rated themes on average:


You might be unsurprised to learn that mate in 1 is also the most popular category to train exclusively, followed by opening puzzles, mate in 2 puzzles, and endgames, although all themes together make up only 5% of puzzle solves, with most people selecting the "healthy mix". Another caveat, of course, is that it's much quicker to solve a mate in 1, so solvers can get through them more quickly!

Puzzles are also being well tagged, with 12x more upvotes on motifs than downvotes, so thank you for helping to build a robust puzzle set.

We hope you're enjoying the new puzzles as much as we are. Happy training!

Daniel Naroditsky wins the first Bullet TA of the year

...and for (surprisingly) only his second time!

After having won his first Titled Arena last November and reaching the podium 9 times before that, GM Daniel Naroditsky has now taken his second title.

The first Bullet Titled Arena of 2021 had a total of 8757 games played by 560 participants. This Arena was decided by streaks rather than berserking: there was 'only' a 6% berserk rate. For the first half hour the leaders switched quickly, with a lot of strong players near the top. After an hour of play, Naroditsky managed to get some long streaks going and took a comfortable lead of over 15 points, which grew to almost 30 points at times during the last hour. In the last 10 minutes he gained only a further 3 points, but with his big lead there was no chance of him being overtaken, despite strong efforts from the other competitors.
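For readers unfamiliar with why streaks matter so much, here is a sketch assuming the standard Lichess arena scoring rules (win = 2, draw = 1, loss = 0; after two consecutive wins, scores are doubled until you fail to win; a berserked win earns one extra point):

```python
def arena_points(results, berserks=None):
    """Score a sequence of game results ('w', 'd', 'l') under arena rules."""
    berserks = berserks or [False] * len(results)
    total, wins_in_row = 0, 0
    for result, berserked in zip(results, berserks):
        fire = wins_in_row >= 2          # two straight wins start a streak
        if result == "w":
            pts = 4 if fire else 2       # wins are doubled during a streak
            if berserked:
                pts += 1                 # successful berserk bonus
            wins_in_row += 1
        elif result == "d":
            pts = 2 if fire else 1       # a draw is doubled too, but...
            wins_in_row = 0              # ...it ends the streak
        else:
            pts = 0
            wins_in_row = 0
        total += pts
    return total

# Four straight wins: 2 + 2 + 4 + 4 = 12, versus 8 for four scattered wins.
assert arena_points(list("wwww")) == 12
assert arena_points(list("wwlw")) == 6
```

A player who strings wins together scores up to twice as fast as one alternating wins and losses, which is exactly how a 15-point lead turns into 30.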

As always, a lot of streamers covered the Arena. IM Alex Astaneh (@AstanehChess) streamed the warm-up and the Titled Arena, as he has for almost every Titled Arena in the past. IM Eric Rosen (@EricRosen) also streamed his point of view. We hope you all enjoyed this format and will be there for the next Titled Arena on 6th February, which will be Chess960!

Introducing Maia, a human-like neural network chess engine

A guest post from the Maia Team

We're happy to announce Maia, a human-like neural network chess engine that was 100% trained on Lichess games. Maia is an engine built in the style of Leela that learns from human games instead of self-play games, with the goal of making human-like moves instead of optimal moves. In a given position, Maia predicts the exact move a human will play up to 53% of the time, whereas versions of Leela and Stockfish match human moves around 43% and 38% of the time respectively. As a result, Maia is the most natural, human-like chess engine to date, and provides a model of human play we will use to build data-driven chess teaching tools.
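The headline figures here are move-matching rates, which are straightforward to compute: for each position, check whether the model's top move equals the move the human actually played. A minimal sketch, with a toy stand-in where a real engine's prediction function would go (the `predict` callable is hypothetical):

```python
def move_match_rate(positions, predict):
    """Fraction of positions where the model's move equals the human's.

    `positions` is a list of (fen, human_move_uci) pairs and `predict`
    is any callable mapping a FEN string to the model's chosen move.
    """
    hits = sum(1 for fen, played in positions if predict(fen) == played)
    return hits / len(positions)


# Toy stand-in: a "model" that always answers e2e4.
always_e4 = lambda fen: "e2e4"
sample = [("startpos", "e2e4"), ("startpos", "d2d4")]
assert move_match_rate(sample, always_e4) == 0.5
```

Maia's 53% on this metric, against roughly 43% for Leela and 38% for Stockfish, is what grounds the claim that it is the most human-like engine to date.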


We trained 9 versions of Maia, one for each rating milestone between 1100 and 1900, on over a total of 100 million Lichess games between human players. Each Maia captures human style in chess at its targeted level: Maia 1100 is most predictive of human play around the 1100 level, and Maia 1900 is most predictive of human play around the 1900 level. You can play Maia on Lichess: @maia1 is Maia 1100, @maia5 is Maia 1500, @maia9 is Maia 1900, and @MaiaMystery is where we experiment with new versions of Maia. If you play them, please tell us what you think on Twitter, in the Maia Bots group, or by email.

Note that each Maia plays at a level above the rating range it was trained on, for an interesting reason: Maia’s goal is to make the *average* move that players at its target level would make. Playing Maia 1100, for example, is more like playing a committee of 1100-rated players than playing a single 1100-rated player—they tend to average out their idiosyncratic mistakes (but Maia still makes a lot of human-like blunders!).


Because we trained 9 different versions of Maia, each at a targeted skill level, we can begin to algorithmically capture what kinds of mistakes players at specific skill levels make – and when people stop making them. In the example below, the Maias predict that people stop playing the tempting but wrong move b6 (the move played in the game) at around 1500.


Even when a human makes a terrible blunder – hanging a queen, for example – Maia predicts the exact mistake made more than 25% of the time. This means that Maia could look at your games and tell you which blunders were predictable, and which were more random mistakes. Guidance like this could be valuable for players trying to improve: If you repeatedly make a predictable kind of mistake, you know what you need to work on to hit the next level.


This is an ongoing research project using chess as a model system for understanding how to design machine learning models for human-AI interaction. We plan to release beta versions of learning tools, teaching aids, and experiments based on Maia (analyses of your games, personalized puzzles, Turing tests, etc.). If you want to be the first to know, you can sign up for our email list here. For more details about the Maia project, head over to the Maia website, read our published research paper, or see the Microsoft Research blog post about the work.

Thank you to Lichess for making this project possible with the Lichess Game Database!

Return of the GMs


Grandmaster Oleksandr Bortnyk won the first Titled Arena of the new year on Saturday, and his second Titled Arena ever after winning one last August. Like his last victory, this event proved to be an extremely close affair with only 5 points separating 2nd and 3rd place from the leader. The 15+ point gap between the rest of the top 10 and first place might suggest that the top 3 was stable throughout the event. In reality, it was highly contested, with many players trying to stake their claim.

Last month’s Titled Arena victor, IM Minh Le, led for most of the first half hour of the tournament. However, Minh faced fierce opposition from GM Andrew Tang. Tang, who ultimately placed 32nd, put a brake on Minh’s strong start by flagging him after Minh hung a rook in their first encounter. He then beat Minh again after sacrificing an exchange in what proved to be a nice attacking game.

Despite these setbacks, Minh recovered and maintained his position at the top of the standings. However, paired with Tang a third time, Minh was outplayed on the black side of a London. That ultimately ended Minh’s grip on the top of the leaderboard; he finished 5th.

After Minh fell back, the arena was led by the anonymous GM Arka50 and Bortnyk. However, keeping up with the theme of the last 2 Titled Arenas, which were won by Minh, a pair of non-GM players stayed determined in their quest to have a Lichess Titled Arena won by one of their own. Soon the tournament leader was IM Mahammad Muradli, a 17-year-old from Azerbaijan. He had a strong showing in the second hour, and staked his claim to the throne with a nice endgame win against GM Arka50.

Despite his initial success, Arka50 and Bortnyk stayed on his tail, and the three frequently traded places. As time ticked down and the arena reached its halfway point, Bortnyk started building a lead with wins against his podium rivals. He also beat GM Daniel Naroditsky and super-GM Vladimir Fedoseev, taking advantage of a tactical blunder in a nice game against the latter.


In the arena’s last hour, the podium appeared to be relatively stable. However, almost half an hour into the last third of the arena, a new threat emerged: GM Brandon Jacobson, after a relatively quiet start, shot up into the top 10 after amassing an impressive 9-game win streak while berserking all of his games.


Brandon wasn’t able to maintain this momentum, and his streak was ended by none other than Bortnyk. Nonetheless, it was impressive: Jacobson berserked every one of his games in the tournament and still picked up wins against renowned GMs like Naroditsky and GM Ivan Cheparinov. His win against Cheparinov was particularly noteworthy, as he demonstrated the power of the two bishops in the endgame despite starting the game with half his opponent’s time.

Jacobson’s impressive streak also shook up the standings: he beat IM Muradli, knocking him out of the top 3 and allowing the mysterious GM FeegLood to seize the chance to jump onto the podium. While it was unfortunate to see Muradli drop off the podium after an impressive showing over the first 2 hours, it goes to show how important stamina and a strong finish are in arenas. Even though Muradli held a top-3 place for close to 2 hours, he finished 8th.

A similar story almost befell Bortnyk. He lost 2 games in a row to Arka50 and Jacobson, and was then paired with Arka50 again with less than 20 minutes to go. Losing this game could have cost Bortnyk his hotly contested lead. However, he kept his cool and beat Arka50 in cruising fashion, using tactical means to take advantage of his opponent’s weak, isolated c-pawns.

Thanks to all the players and viewers for their continued participation and support. Without the online community, these events wouldn’t be possible. Congratulations to GM Oleksandr Bortnyk once again, and to the rest of the winners.

Finally, a special shout-out to the players and viewers who elected to stream this arena. With the chess streaming community becoming extremely popular in the past few months, it’s heartwarming to see streamers, some new and some old, being able to provide a player’s perspective or commentary to an audience of hundreds to thousands.


The next Titled Arena will be held on 16 January. You can find the Titled Arenas for the coming months in one of our latest blog posts.

Lichess End of Year Update 2020

Keep on keeping on.

It's safe to say that everyone will remember 2020 for the foreseeable future, and mostly not for pleasant reasons. The global pandemic has affected us all significantly, and we are certainly wishing everyone a much better 2021! Still, at the end of the year it's important to take stock and be grateful for any good points we can find. For example, Lichess celebrated its 10th birthday this year!

Perhaps unsurprisingly, with a large portion of the world in lock-down online chess has seen an incredible increase in players, and it has been no different for Lichess. In fact, back in March and April we were running to keep up with demand as the player base doubled over the space of weeks, and we scrambled to streamline the code and get additional servers. We're happy to say that we're now handling the massive traffic easily, with peaks of over 110,000 players online!


Lichess online player count 2020

New Features

We've had a bunch of great new features from 2020, and here are some of our highlights:
  • Swiss tournaments - available for teams, and for a long time one of the most requested features! We used to have them and now they are back!
  • New puzzles and puzzle system, you can read more about it here and stay tuned for further updates...
  • Stockfish 12 is now in use for all server analysis with NNUE, as well as available for browser analysis (classical Stockfish), details here.
  • New integrations for third party clients and DGT boards.
There was also a focus on helping teachers, clubs and event organizers move online:
  • Flexibility for tournament organizers: Increased limits, editable tournaments, better pairings in small arenas, TRF exports, arbitrary starting positions for tournaments and simuls.
  • Revamped team management, multiple team leaders, team tournaments, team battles, messaging all team members.
  • APIs for event organizers to make challenges, manage events, start clocks, watch games, and broadcast games.
  • Higher initial Glicko2 deviations, so beginners have to take fewer losses to get an appropriate rating. Removed chess CAPTCHAs from critical paths (like signup), to make the site more beginner friendly.
  • Upgraded the private message inbox to a real-time chat.
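The rating-deviation change above can be illustrated with the simpler Glicko-1 update formula. Lichess actually uses Glicko-2, but the qualitative effect is the same: a larger initial deviation lets a new player's rating move much faster per game.

```python
import math

Q = math.log(10) / 400  # Glicko-1 scaling constant

def g(rd):
    # Weighting factor: discounts results against uncertain opponents.
    return 1 / math.sqrt(1 + 3 * (Q * rd / math.pi) ** 2)

def glicko1_update(rating, rd, opp_rating, opp_rd, score):
    """One-game Glicko-1 rating update (score: 1 win, 0.5 draw, 0 loss)."""
    e = 1 / (1 + 10 ** (-g(opp_rd) * (rating - opp_rating) / 400))
    d2 = 1 / (Q ** 2 * g(opp_rd) ** 2 * e * (1 - e))
    return rating + Q / (1 / rd ** 2 + 1 / d2) * g(opp_rd) * (score - e)

# The same upset win, with different starting deviations:
low_rd_gain = glicko1_update(1500, 100, 1700, 60, 1) - 1500
high_rd_gain = glicko1_update(1500, 350, 1700, 60, 1) - 1500
assert high_rd_gain > low_rd_gain  # the high-RD beginner gains far more
```

Raising the initial deviation therefore means a beginner placed at the default rating needs fewer losses (or wins) before settling at an appropriate level.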

Large events

Along with the remarkable growth of this year, many high profile events that otherwise might have been played OTB have shifted to online play, with Lichess being the host to some of these. We're very proud to be able to support such prestigious tournaments!
  • St Louis Rapid and Blitz with Carlsen, Kasparov, Nakamura, Firouzja and more...
  • Play For Russia won by Alexander Grischuk in a field of top Russian GMs.
  • Katara International won by Magnus Carlsen over top bullet talents such as Alireza Firouzja, Daniel Naroditsky and Andrew Tang.
  • Levitov Christmas Cup won by Alireza Firouzja over Alexander Grischuk 6-4 in the final and against a strong field including Levon Aronian, Alexander Morozevich and Vladimir Kramnik.
As well as these, we had some record breaking events by player numbers!

News and Journalism

We also branched out a little into more varied blog posts, here are some of our favourite posts from the year:

Technical Updates

Surviving the massive inflow of players required a lot of moderator, sysadmin and developer work. The technical changes roughly belong to three categories.

First, buying time: We quickly ordered better servers, but judging by the increased delivery times, we were probably not the only ones who needed to upgrade their hardware. Another change in this category was slightly randomizing the start time of tournaments, to prevent huge traffic spikes exactly at every full hour.

Second, real architectural and algorithmic improvements. Three major bottlenecks were the WebSocket frontend server, of which there can now be multiple (each client randomly chooses one and sticks to it); tournament pairings, where database accesses and the matching algorithm were optimized; and the online friends box, which now uses a custom in-memory data structure. Pairing huge Swiss tournaments is only possible using a custom algorithm that modifies the Burstein system for efficiency.

Third, working around seemingly trivial limitations. The number of games in the search index now exceeds the range of a signed 32-bit integer, and Elasticsearch cannot have more than 2147483519 documents in a single shard, requiring re-indexing with more shards. The number of connections between the nginx frontend servers and the backend is limited by the number of ephemeral ports on the frontend server, a 16-bit integer. This can be worked around by using additional network interfaces (initially we used the available physical interfaces, and recently figured out how to do it with virtual ones).

Besides the urgent issues, we improved our setup by:
  • Adding a security policy and asking people to review the site's security. Found and fixed issues include a couple of opportunities for hardening, some missed cases of noopener, and a strength indicator when selecting passwords. We also blogged about account security.
  • Modernizing the client side. Almost all JavaScript is now neatly organized into components and rewritten in TypeScript. We bumped the target to ES2016 and switched from gulp with browserify to rollup.js, which was able to produce about 5% smaller bundles. The bundles no longer use jQuery. Some plugins were rewritten as small TypeScript modules. In other places, jQuery was replaced by the much smaller cash.js.
  • On the server side, stripping down Play framework, switching from Scalaz to Cats, and optimizing the way we serialize asynchronous work.
  • Making more pages translatable on Crowdin. Only a few exceptions remain.
We also started showing the exact deployed version, and are now posting a changelog after each site update, a more condensed and less technical summary of the full commit log.

After all this, we were very well prepared for the additional growth later in the year, and back to working on other improvements and features.


Thanks to all contributors, patrons and players! See you on Lichess in 2021, and maybe at a meetup, should that become possible again!


New Puzzles are here!

The wait is over.

It is with great pride that Lichess is finally able to announce our new puzzle system. Give it a try here.  

The New Puzzle Set

The new puzzle system uses an entirely new puzzle set generated from games played on the site. It was made with Stockfish 12 NNUE using 40 meganodes per position (roughly equivalent to more than 80 million nodes with classical evaluation, or 16 times the node limit used for the previous puzzle set, not even accounting for considerable improvements in the new Stockfish version). We took to heart the feedback about the previous Lichess puzzles, and made sure the new ones have a single, crystal-clear solution.
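What "a single, crystal-clear solution" can mean in practice: compare the best and second-best candidate moves from a MultiPV search and keep only positions where the gap is decisive. The sketch below is a simplification with invented centipawn thresholds; the real generator's criteria are more involved.

```python
def has_single_clear_solution(multipv_cp, win_threshold=500, gap_threshold=200):
    """Decide whether a position makes a good puzzle candidate.

    `multipv_cp` holds centipawn scores from the solver's point of view
    for the engine's candidate moves, best first (as from a MultiPV
    search). Thresholds are illustrative, not Lichess's actual values.
    """
    if len(multipv_cp) < 2:
        return len(multipv_cp) == 1   # only one legal move: trivially unique
    best, second = multipv_cp[0], multipv_cp[1]
    # The best move must win decisively, and every alternative must be
    # clearly worse, so there is exactly one correct answer.
    return best >= win_threshold and best - second >= gap_threshold


assert has_single_clear_solution([900, 50])       # one move wins, rest fail
assert not has_single_clear_solution([900, 850])  # two winning moves: ambiguous
```

Filtering on the gap between candidate lines, rather than just on the best line, is what rules out the "multiple good answers" complaints from the old puzzle set.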

So far we have generated 650k puzzles, aiming for 1 million in the next three months. As with almost everything Lichess does, the puzzle set is released for the common good. It can be downloaded and used freely in any software you like. The hardest part of any puzzle-centric chess software is making the puzzles. It's difficult to get right, and even if you do, it uses an incredible amount of processing power and takes ages to produce puzzles in large quantities. Any developer that would like to tinker in the world of puzzles will no longer have to start from square one.


Puzzles now have themes, a feature totally new to Lichess. Each puzzle is “tagged” as relevant to one or more of a large list of categories.


See the themes for yourself.

Do you think your rook endgames are poor? You can now focus on puzzles that will only feature rook endgames. Do you often get blown out of the water in the opening? Do you miss forks? You can now tailor your training to your own personal requirements. We even have a special section called “equality” that asks you to find the narrow path to an even position. These puzzles will only appear when specifically requested.


After finishing each puzzle, you will be prompted to vote on two subjects: a simple thumbs-up or thumbs-down on the puzzle itself, and then on which themes apply. You are well within your rights to click on “Jump to next puzzle immediately,” but we hope you won’t. This feedback only takes a moment and will be invaluable in making the puzzle system the best it can be. Regardless of your level, you are qualified to make that call: it’s about whether or not you enjoyed the puzzle. The vote is purely subjective; it brings the human touch to the puzzle set. The one small favor we ask is that you don’t thumbs-down puzzles that are too easy or too hard. Calibrating difficulty is the job of the rating system.

Theme voting is simple enough: would a person looking for fork puzzles like to see this puzzle? If so, thumbs-up “forks.” If you don’t see a theme on the list of options, you can select it manually from the “add another theme” drop-down menu. Some themes can be detected programmatically; these will be shown, but not available for voting. If you see a theme tagged that you think is wrong, you can downvote it. If you’re not sure about the themes, you can check out this study.

The new puzzle system may take some time to stabilize, and we appreciate your patience. First of all, there will always be bugs in new software, and you can be sure we will be scrambling to fix them ASAP. Another problem with so many brand-new puzzles is that they may not be properly rated.

The puzzle system on Lichess has always used exactly the same rating system as games, Glicko2. Every puzzle attempt is a “game” between you and the puzzle. If you get it right, you win and take your rating points as the prize. If you get it wrong, the puzzle walks away victorious. (Note that you will get fewer rating points from themed puzzles, as the theme gives a significant hint.) Brand-new puzzles will have volatile ratings, and it may take some time for them to find their level. Fortunately, with the size of Lichess’s user base, this time should be measured in days rather than weeks: three million puzzles are attempted on Lichess every day.
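To make the "every attempt is a game" idea concrete, here is a minimal Glicko-2-style update for a single puzzle attempt. This is a sketch only: it omits the volatility step of the full algorithm, and Lichess's actual parameters and implementation may differ.

```python
import math

SCALE = 173.7178  # Glicko-2 scale factor between display ratings and internal scale

def g(phi):
    """Dampening factor for an opponent's rating deviation."""
    return 1 / math.sqrt(1 + 3 * phi ** 2 / math.pi ** 2)

def expected_score(mu, mu_j, phi_j):
    """Expected score against an opponent, on the Glicko-2 internal scale."""
    return 1 / (1 + math.exp(-g(phi_j) * (mu - mu_j)))

def glicko2_update(rating, rd, opp_rating, opp_rd, score):
    """One-game Glicko-2 update; the volatility step is omitted in this sketch."""
    mu, phi = (rating - 1500) / SCALE, rd / SCALE
    mu_j, phi_j = (opp_rating - 1500) / SCALE, opp_rd / SCALE
    e = expected_score(mu, mu_j, phi_j)
    v = 1 / (g(phi_j) ** 2 * e * (1 - e))          # estimated variance of the game
    phi_new = 1 / math.sqrt(1 / phi ** 2 + 1 / v)  # deviation shrinks with evidence
    mu_new = mu + phi_new ** 2 * g(phi_j) * (score - e)
    return mu_new * SCALE + 1500, phi_new * SCALE

# Solving a 1500-rated puzzle as a 1500-rated player with an uncertain rating:
new_rating, new_rd = glicko2_update(1500, 200, 1500, 100, score=1.0)
```

Because both sides of the "game" carry a rating deviation, a brand-new puzzle with a high deviation moves quickly toward its true level, which is why the volatile period lasts only days on a site this size.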

So, tell us what you think in the comments! This website exists only to serve its users, and we always value your opinion. We'd also like to thank the contributor of a puzzle rating predictor, which we may use in the future to assign better initial ratings to new puzzles.

Titled Arena Announcement(s)

Announcing our schedule for upcoming Titled Arenas!

We're pleased to announce a schedule for our upcoming Titled Arenas!

All events will be preceded by a warm-up arena open to all players with a minimum of 20 rated games in the relevant time control and variant.

Prizes unless otherwise stated:

1. $500, 2. $250, 3. $125, 4. $75, 5. $50.

Participation requirement: Verified FIDE or NM title (see below)

| Event                 | Date          | Link  | Warm-up |
|-----------------------|---------------|-------|---------|
| January 2021 Blitz TA | 2nd Jan 2021  | Event | Warm-up |
| January 2021 TA       | 16th Jan 2021 | Event | Warm-up |
| February 2021 960 TA  | 6th Feb 2021  | Event | Warm-up |
| February 2021 TA      | 20th Feb 2021 | Event | Warm-up |
| March 2021 Blitz TA   | 6th Mar 2021  | Event | Warm-up |
| March 2021 TA         | 20th Mar 2021 | Event | Warm-up |
| April 2021 Blitz TA   | 3rd Apr 2021  | Event | Warm-up |
| April 2021 TA         | 17th Apr 2021 | Event | Warm-up |

Practical Information

If you are new to Lichess, it's important to become familiar with the arena tournament format. Read our FAQ and consider trying out an arena tournament in advance. Arena points are awarded based on the number of games you win. If multiple players finish the tournament with the same number of points, tournament performance is used to break the tie. Prizes will be awarded within three days after the event, through PayPal or BTC.

Title Verification

To participate in the Titled Arena events, you need a verified titled account on Lichess. If you don't already have a Lichess account, create one. Then, to get your FIDE or NM title verified, please fill out this title verification form, and we will process it within 24 hours. If you have already verified your title on Lichess, you don't have to do this again. Once your title has been verified, you will be able to join the tournaments.


We've had a bunch of players streaming the previous Titled Arenas, including Magnus Carlsen, John Bartholomew, Eric Rosen and ChessNetwork. We encourage both participants and fans to live-stream the tournament. If you plan to, check out our small streamer's kit for some useful graphics to include in your overlay.

2020 Crazyhouse World Championship Grand Final

A titanic confrontation.

The Grand Final of the 2020 Crazyhouse World Championship, organised by JannLee with a $2000 prize fund, is upon us! It takes place over 3 days: Tuesday 22nd, Monday 28th, and Wednesday 30th December at 20:00 UTC. The challenger, Jasugi99 (NM Janak Awatramani from Canada), was the runner-up to JannLee in 2017 under his old handle, TwelveTeen. In 2020, he emerged from a field of 142 participants, coming top in a round-robin of 12 Candidates with an incredible win percentage of 85%. The reigning World Champion is IM opperwezen (Vincent Rothuis from the Netherlands), who beat JannLee in the 2018 final and again to win the bullet crazyhouse world championship in 2019. Jasugi99 and opperwezen will play 60 games of 3+2 crazyhouse (so 20 games and about 2 hours each day). All three days will be live-streamed by JannLee on Twitch, co-commentating with Mugwort, Kleerkast and the great man himself, crazyhouse aficionado GM Yasser Seirawan!

For more information on the recent happenings in the crazyhouse scene, you can read the longer version of this article here.

Grandmasters are overrated

Revenge of the under-titled.

On Saturday, IM Minh Le won his second straight Titled Arena, adding a Bullet Titled Arena victory to his Blitz Titled Arena victory from 2 weeks ago. This was his third Titled Arena victory overall, among many other finishes in the top spots. There is a long tradition of IMs performing well in Titled Arenas; names like opperwezen and MeneerMandje come to mind, players who have never looked even slightly out of place among GMs. It’s only natural, since what you need to do to get a GM title and what you need to do to get a 2800 Lichess bullet rating do not align precisely, even if many of the same people do both.

Minh battled GM Daniel Naroditsky, who finished second, right to the bitter end, trading the lead numerous times. Naroditsky started the event fashionably late, as he often does, and it's hard to argue that this has ever hurt his chances much. He began playing the 2-hour event 20 minutes after it started, and 30 minutes later he was in the top 10. He was, at one point, in the lead and 20 points ahead of the entire field. This is the sort of luxury afforded to those who can score close to 100% even against other Grandmasters. It also bears mentioning how Naroditsky's start time compares to the eventual winner's: Minh started playing just 2 minutes after Naroditsky.

Third place went to the anonymous GM FeegLood, fourth to GM Dmitry Andreikin, and fifth to another anonymous player, GM Arka50. The next Lichess Titled Arena will be on January 2nd.

Stockfish 12 on Lichess

Shorter waits for stronger analysis

Three months after its groundbreaking release, Stockfish 12 (or rather the current development version of Stockfish 13) is finally available on Lichess, for both server and client side analysis.

If you’re not interested in all the technical details, here’s a short summary:
  • Stockfish 12 NNUE is more than 100 Elo stronger than the previously used Stockfish 11.
  • Stockfish 12 (classical, not NNUE) is now available for local analysis.
  • Analysis tabs once again remember if you turned the engine on or off.
A new version of fishnet, Lichess's distributed analysis software, is available. Updates include:
  • Optimized "queuing" of games for analysis, meaning shorter waits.
  • Picking the best build of Stockfish for the contributor's hardware from the additional available targets.
  • Consistent analysis quality.
  • Rewrite in Rust (from Python).

Stockfish 12

Among other improvements, Stockfish 12 brings a major new feature: NNUE (or ƎUИИ, for Efficiently Updatable Neural Network). NNUE optionally replaces the handcrafted evaluation function with a neural network.

The main innovation is the ability to incrementally update the evaluation after moves, instead of evaluating the entire neural network from scratch. Stockfish remains a CPU based engine, and provides various build targets to progressively take advantage of modern vector instructions.
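The incremental idea can be sketched with toy dimensions. This is not Stockfish's real network architecture (the feature counts, weights, and function names here are purely illustrative): because the first layer is linear in the active features, making a move only requires subtracting and adding a few weight columns rather than recomputing the whole sum.

```python
import random

random.seed(42)
N_FEATURES, ACC_SIZE = 64, 8  # toy sizes; real NNUE uses tens of thousands of features

# First-layer weights: one column of ACC_SIZE values per input feature.
W = [[random.random() for _ in range(ACC_SIZE)] for _ in range(N_FEATURES)]

def full_refresh(active_features):
    """Recompute the accumulator from scratch: the sum of all active columns."""
    acc = [0.0] * ACC_SIZE
    for f in active_features:
        for i in range(ACC_SIZE):
            acc[i] += W[f][i]
    return acc

def apply_move(acc, removed, added):
    """Incremental update: subtract vanished features, add appearing ones."""
    acc = acc[:]
    for f in removed:
        for i in range(ACC_SIZE):
            acc[i] -= W[f][i]
    for f in added:
        for i in range(ACC_SIZE):
            acc[i] += W[f][i]
    return acc
```

A move typically changes only a handful of features, so `apply_move` touches a few columns where `full_refresh` would touch them all; the rest of the network is then evaluated on the small accumulator.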

The following graph shows the strength improvements over the last Stockfish releases. SF_classical is the final development version before SF_NNUE landed. After the jump to SF_NNUE, some gains are specific to NNUE, others apply also to Stockfish with NNUE turned off.


Congratulations to all Stockfish contributors!

fishnet v2, rewritten in Rust

Server side analysis on Lichess is brought to you by fellow users who volunteer their CPU time using the fishnet client. fishnet started as a short and simple Python script that accumulated a lot of features over time.

To pay back technical debt, it has now been rewritten in Rust. Rust’s type and ownership system is amazing at catching issues at compile time, and it was refreshing to be able to confidently do major refactorings on concurrent code.

fishnet is using tokio for asynchronous communication with the Lichess API, a local queue, the engine processes, and the user who can stop the client at any time. Each of these are asynchronous tasks that talk over channels.

Being a systems programming language, Rust also has convenient access to CPU feature detection, letting fishnet pick the best corresponding Stockfish build, rather than relying on this terrible hack in Python, which used ctypes to allocate executable memory and run hardcoded machine instructions. This is important, because Stockfish 12 benefits immensely from the availability of vector instructions and brings a much wider array of possible target features.
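The selection logic itself is straightforward once the features are known. A sketch, assuming a set of CPU feature flags has already been detected (the target names below follow Stockfish's build targets, but the required-flag sets are simplified for illustration):

```python
def pick_stockfish_build(cpu_flags):
    """Return the most capable Stockfish build the CPU supports.

    Target names follow Stockfish's build targets; the required-flag
    sets here are simplified for illustration.
    """
    targets = [                      # ordered from most to least demanding
        ("x86-64-bmi2", {"avx2", "bmi2"}),
        ("x86-64-avx2", {"avx2"}),
        ("x86-64-sse41-popcnt", {"sse4_1", "popcnt"}),
        ("x86-64", set()),
    ]
    for name, required in targets:
        if required <= cpu_flags:    # all required flags present?
            return name
    return "x86-64"                  # baseline fallback
```

The hard part is populating `cpu_flags` reliably, which is exactly what Rust's built-in feature detection provides and the old Python client had to fake with executable-memory tricks.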

Other improvements include:
  • fishnet is now available for aarch64, with Stockfish built using emulation on GitHub Actions.
  • It will now use multi-variant Stockfish only for variant games or games with exotic material combinations, a ~10% efficiency improvement.
  • The decision to value consistency and debuggability more than utmost efficiency. Your server-side analysis will now have consistent quality, no matter which client provided the analysis. This means using the lowest common denominator, which is 1 thread and small hashtables. It also means using NNUE even on old hardware. The next section explains how fishnet v2 delivers stronger analysis and does so more quickly, regardless.
  • Node targets are tuned to take the same time as Stockfish 11 on middle-aged x86-64-sse41-popcnt CPUs. (In this sense, 4,000,000 classical nodes appear to be equivalent to 2,250,000 NNUE nodes.) This calibration means you are actually getting a strength improvement out of this update, rather than just converting all gains into efficiency. Older CPUs (the minority) will take longer to reach the node target. Newer CPUs (the majority) are able to reach the node target more quickly.
Thanks to all fishnet contributors! If you are still running v1, you will hear from us soon, nudging you to upgrade. You can find instructions to upgrade or how to start contributing in the README.

fishnet queuing

Server-side analysis requests on Lichess are appended to one of two queues: the “user queue”, for analysis requested by humans, and the “system queue”, for automated analysis (for example, as a first step to identify suspicious games for a closer look).

The user queue needs to be processed very quickly, because the user might be actively waiting for the result. The system queue is allowed to build some backlog at peak time, but should be cleared during the day.

Each fishnet client uses the queue status and an estimate of its own performance to see where it would be most useful. Consider a slow fishnet client with 1 core. If the user queue is normal, it will simply do system analysis and let faster clients handle the user queue. It will pick up work from the user queue only if it estimates that it can finish a game before a faster client has a chance to do so.
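That decision might be sketched like this. This is an illustrative heuristic, not fishnet's actual algorithm, and the parameter names are assumptions:

```python
def pick_queue(my_eta_seconds, fast_client_eta_seconds, user_queue_len):
    """Decide which queue a fishnet client should pull from.

    Illustrative heuristic, not fishnet's exact algorithm: take user work
    only when the user queue has a backlog and this client expects to
    finish a game before a faster client would get to it.
    """
    if user_queue_len == 0:
        return "system"              # nothing urgent; do background work
    if my_eta_seconds <= fast_client_eta_seconds:
        return "user"                # we can deliver before a faster client
    return "system"                  # leave the user queue to faster clients
```

The effect is that slow clients soak up the system backlog while fast clients keep user-requested analysis snappy, without any central scheduler assigning work.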

For the time being, fishnet batches always consist of complete games (with openings coming from stored cloud evaluations). So each analysed game can be attributed to a particular client.


Most games have more positions than the typical number of CPU cores, so some queuing is happening locally as well. For simplicity, the old Python client was putting available cores into groups, and then running mostly independent workers for each group. The diagram below shows an idealized scenario with an 8 core fishnet client analysing 2 short games of 11 and 20 positions. (In reality, not all positions will take equal time.) fishnet v2 lets each core individually pull from a local queue, which in turn pulls from the Lichess API, if empty. This change means that all analysis will finish much faster on average, without taking longer in the worst case.
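The per-core pull can be sketched with a shared local queue. This is a toy model, not the real tokio-based implementation, and `analyse` is a stand-in for an engine call:

```python
import queue
import threading

def analyse(position):
    # Stand-in for a single-position Stockfish evaluation.
    return f"eval({position})"

def run_local_queue(positions, cores=8):
    """Every core pulls the next position from one shared local queue,
    instead of owning a fixed group of positions (the old Python model)."""
    local_q = queue.Queue()
    for pos in positions:
        local_q.put(pos)

    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                pos = local_q.get_nowait()
            except queue.Empty:
                return                    # queue drained; this core is done
            res = analyse(pos)
            with lock:
                results.append(res)

    threads = [threading.Thread(target=worker) for _ in range(cores)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

With a shared queue, no core sits idle while another still has a backlog of assigned positions, which is why the average completion time drops.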


You may notice that the last statement is not entirely true (especially with the switch to single-threaded analysis). A short game (like the orange one in the following diagram) can take slightly longer, if only a few positions remain for analysis in the final time step. This effect is negligible on longer games, and short games are … well … short anyway.


Stockfish 12 in WebAssembly

Lichess maintains a fork of Stockfish with WebAssembly support, but Stockfish 12 NNUE is not quite ready for the web.

A small obstacle is that the compressed neural network file is about 10MB. For comparison, all of Lichess’s compressed client-side assets (JavaScript for every page, CSS, all board themes, all 2D and 3D piece themes, and all sound sets) sum to 35MB, a figure we try hard to keep low.

Time will tell how much the Elo gap between NNUE and classical eval increases, and what the best tradeoff for client-side analysis turns out to be.

More importantly, the WASM build of Stockfish 12 NNUE appears to be inexplicably slow, even considering the absence of vector instructions in WebAssembly. In some sense this might be good news. Maybe the slowness is due to a bug or oversight that can be easily fixed once found, rather than a more fundamental issue. Experimental WASM SIMD gives a speedup, but it is insufficient, given the bad base performance. If you know your way around C++, emscripten and WebAssembly, tackling this issue would be an amazing contribution. The last attempt can be found in this pull request, which is simply stockfish.wasm with a hack to make the EvalFile available in memory.


Nonetheless, Stockfish 12+ delivers considerable improvements even with NNUE turned off, so let’s withhold it no longer! We have updated the client-side engine to the strongest available build. We have also taken this opportunity to once again remember the Stockfish toggle state (at least per tab/session).

Invisible Pieces: Women in Chess


Women don’t play chess, and when they do, they play badly.

Chess is experiencing a renewed surge of interest thanks in part to The Queen’s Gambit, a Netflix show exploring a young American woman’s rise to world champion. It’s a great show, and it also manages to neatly sidestep any real sociological commentary on what it means for Beth Harmon to be female in this world. However, back in the real world, as a woman involved heavily in the chess scene, the gender politics are much more difficult to ignore.

It's hard to be specific about the number of active female players, but most estimates fall somewhere between 5% and 15%. Hou Yifan is the only woman in the FIDE top 100, coming in at number 86. Judit Polgar, the highest-ranked female player of all time, was number eight back in 2005. There has never been a female world champion.

This raises an awkward question: in a game that relies on mental agility rather than physical strength, why is there such a significant gender gap in chess rankings? If women really are equal to men, why is that not reflected in those rankings?

Biological essentialists attempt an answer to this question by claiming that women are fundamentally unable to play chess to the same standard as men. There is something wrong with our brains. Bobby Fischer, one of the greatest chess players of all time, certainly felt this way.

QTCinderella spoke about her terrible experience during the Pogchamps tournament, calling out the sexism and toxicity of chess culture. She stated that her looks were a topic of conversation in a way that was never an issue for the male participants, and that she had a complaint filed against her for being ‘crude’, despite the fact that her jokes were, if anything, less crude than the male players’.

Alexandra Botez, one of the pre-eminent female chess streamers

QTCinderella on sexism in chess

Levitov Chess: Christmas Cup

A holiday Chess Tournament with big prizes

Lichess is proud to announce a holiday-season prize tourney, open to all, with a prize fund of 1,000,000 rubles (approximately $16,000).

The Levitov Christmas Cup is a two-stage tournament. First there will be 5 Swiss qualifiers on December 9th, 12th, 16th, 19th, and 23rd, all played at 19:00 Moscow time / 16:00 UTC. From those qualifiers, 10 players will advance to the Knockout stage on December 24th and 25th. The first qualifier is already available here. The other 4 events will become available soon; they can be found on the Levitov Chess team page [RU]. For an English-language option, you can use the Lichess Curator team, which keeps track of all big events played on Lichess.

The players will be awarded points after each qualifier; the 10 players with the most points will advance to the Knockout stage.

1st place – 200

2nd place – 170

3rd place – 150

4th place – 135

5th place – 120

6th place – 105

7th place – 90

8th place – 75

9th place – 50

10th place – 35

There will be an official Russian Language Stream with GM commentary available at the Levitov Chess Youtube page.

International Master wins Titled Arena (again)!

Minh Le Tuan wins the Titled Arena for the second time.

The December Titled Blitz Arena came with a little surprise: it was not won by a Grandmaster as usual, but by an International Master, IM Minh Le Tuan (@mutpro) from Vietnam. He led the Arena for most of the event and managed to finish with a 12-point lead over second place. This was the third time an IM has won the Titled Arena and the second time for IM Minh Le Tuan, whose first victory was back in September 2018. He is also now the fourth person to win a Titled Arena more than once.

Second place was taken by GM David Paravyan (@drop_stone), who got to the podium for the first time after playing 14 events. Third place was taken by TA regular, GM Nihal Sarin (@nihalsarin2004). Following on from that, fourth place was anonymous GM @FeegLood with fifth place going to Armenian GM Zaven Andriasian (@Zaven_ChessMood).

This Arena was decided by berserk rate more than streaks: first and second place both had berserk rates of 98%, while streaks rarely lasted long. IM Minh Le Tuan took the lead after 1 hour of play and did not lose it for the rest of the Arena, making it a very convincing win. For part of the Arena he was leading by 20 points! GM David Paravyan was in second place for most of the time, and he even managed to beat IM Minh Le Tuan four times in their seven games. But in the end that didn’t secure overall victory, as IM Minh Le Tuan managed to win six more games on streaks (each therefore worth two additional points). From third place down the leaderboard the points were much closer, and every win counted:

December 2020 Lichess Blitz Titled Arena Infographic
by Lichess on YouTube

The next Titled Arena will be in two weeks! You can watch it here. Stay tuned for the next Titled Arena post, where we will announce the dates for the next few months!

Our Favorite Open-Source Sites #2

Because sharing is caring

Open-source software dominates the internet, and internet chess is no different. It can't be overstated how much easier it is for a developer to build something when they don't have to start from scratch and can instead build on the work of others. With that in mind, Lichess wanted to give a helping hand to a few websites that follow our free and open-source software philosophy. Please give them a try!

Blitz Tactics

Opening Tree


Daniel Naroditsky wins his first Lichess Titled Arena


It is some kind of Miracle that before

The next Titled Arena is on December 5th. Follow all the standard Lichess channels of communication for details, including the brand new Lichess Instagram account.