People use AI for a lot of things. This particular site is dedicated to hobbyist use of large language models for roleplaying and creative writing, specifically the Mythomax language model.

Okay, there’s the explanation for newbies and normies done. Let’s get down to it.

Since Mythomax came out, it quickly swept aside all other contenders for best RP model: despite being a perverse, incestuous, Frankensteinian merge of other models, it was ludicrously stable, wildly creative, and smarter than most of the previous generation’s leading models at less than half their size. As far as I know, we haven’t had a consensus go-to RP model since Pygmalion. And the advantage of all this consensus is that most of us are playing under the same conditions again, which opens up exciting possibilities. In an environment of very diverse model choices, it’s hard to give advice or run experiments that will be useful to a wide variety of people: a prompt that works for Airochronos may be iffy on Chronos-Hermes, so your painstakingly crafted prompt may only help Airochronos users, who find themselves out in the cold the moment they toy with a different model. But if we’re all using the same thing, well then! That’s different.

This site is devoted to making it easier for people to use Mythomax (and, to a lesser extent, LLMs in general) for creative, mostly hobbyist purposes. It aspires to cover everything from basic usage guides to advanced prompt-crafting techniques. It centers on what I’ve been calling the “Ask Max” method, which, in short, is a way of interrogating your model to make sure it understands your prompt the way you meant it.
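To make that concrete, here’s a minimal sketch of what one “Ask Max” round trip might look like in Python. It assumes a KoboldCpp-style /api/v1/generate endpoint on localhost; the endpoint, port, character card, and question wording are all placeholders for illustration, not part of the method itself.

```python
import requests

# Hypothetical local backend (KoboldCpp-style); adjust host/port for your setup.
API_URL = "http://localhost:5001/api/v1/generate"

# The prompt you actually intend to use in play (a toy example here).
card = (
    "Riya is a sardonic tavern keeper who never leaves the bar unattended. "
    "She speaks in short, clipped sentences."
)

# Instead of starting the roleplay, ask the model to explain its reading.
interrogation = (
    f"{card}\n\n"
    "Before we begin: in your own words, describe Riya's personality, "
    "her speech style, and the one rule she always follows."
)

payload = {
    "prompt": interrogation,
    "max_length": 200,   # enough room for a short summary
    "temperature": 0.7,
}

response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()

# If the summary misreads the card (say, it ignores "never leaves the bar"),
# that's your cue to reword the prompt before the actual session starts.
print(response.json()["results"][0]["text"])
```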

Currently, we’re still under construction: a lot of pages are copy-pasted from Rentry, and some are stubs. But there’s a lot planned. And the whole site will be available on GitHub, where you’ll be able to file issues, start discussions, or submit pull requests for changes you want to see. Ultimately, I want to take this information out of the realm of Discord servers and scattered Rentry pages and give it a centralized home.

What Tests?

If you’ve talked to me on Discord for more than two seconds, you’ve probably heard me reference some nebulous tests I run on various models, covering sampler settings, system prompts, formats, and so on. I was a QA engineer in a previous life, so test design is something I take very seriously. And if I’m going to bring these tests up so often, it behooves me to actually explain what they cover and how I evaluate the results.
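As a taste of the shape these tests take, here’s a stripped-down sketch: one fixed prompt run across a grid of sampler settings, with each output logged for comparison. The endpoint and the particular settings are illustrative assumptions (again, a KoboldCpp-style API), not the actual suite; the full post covers what the real tests measure.

```python
import itertools
import requests

API_URL = "http://localhost:5001/api/v1/generate"  # placeholder endpoint

# A fixed prompt keeps the comparison fair: only the samplers change.
PROMPT = "The innkeeper slid the key across the counter and said,"

temperatures = [0.7, 1.0, 1.25]
top_ps = [0.9, 0.95]

for temp, top_p in itertools.product(temperatures, top_ps):
    payload = {
        "prompt": PROMPT,
        "max_length": 120,
        "temperature": temp,
        "top_p": top_p,
        "rep_pen": 1.1,  # held constant across runs
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    resp.raise_for_status()
    text = resp.json()["results"][0]["text"]
    # A real test would score these outputs; here we just eyeball them.
    print(f"--- temp={temp} top_p={top_p} ---\n{text}\n")
```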
Read more →

Mixtral Spiral

Mixtral is awesome. It has one huge flaw, though, which many of you have already noticed. It goes by a lot of names: looping, parroting, going psycho, going schizo, throwing a tantrum, and so on. All of them refer to the same phenomenon: without warning, and often around the 4k-token mark, Mixtral will suddenly start repeating or paraphrasing the prompt, failing at reading comprehension, speaking out of character, speaking for the user, sprinkling its responses with random punctuation, and sometimes losing coherence completely and descending into word salad.
Read more →

Mixtral 8x7b Verdict

There’s a new kid on the block. Or, well, the smarter, chonkier cousin of the previous new kid on the block, Mistral. Mixtral 8x7b is impressively smart, writes varied prose, and is fantastic at imitating whatever voice or writing style you’re going for. And because it only activates two of its eight 7b experts per token, it generates at speeds closer to a 13b model than its full size would suggest. On the downside, oh my god the size, oh my god the prompt processing times.
Read more →

Clarity Alpha

Parroting is my white whale. (Okay, one of them.) It’s a tough issue to address, since there are so many possible triggers. My theory, which I’ve gone into some detail about on the symptoms page, is that parroting happens when the model doesn’t know what else to do: either it’s confused or it’s short on possible responses, so it falls back to the “safe” option of just repeating the prompt. Under that theory, there are really only one or two root causes, and confusion is the major one.
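Confusion itself is hard to measure, but the parroting it produces is easy to spot mechanically. Here’s a crude, hypothetical heuristic of my own devising (not something from the Clarity Alpha tests): count how many of the response’s word n-grams already appear in the prompt. It only catches verbatim echoes, not paraphrase, and the cutoff is a guess rather than a calibrated number.

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def parrot_score(prompt: str, response: str, n: int = 5) -> float:
    """Fraction of the response's n-grams that already appear in the prompt.

    Near 0.0 means mostly fresh text; near 1.0 means the model is echoing
    the prompt back. Treat it as a first-pass filter, nothing more.
    """
    resp_grams = ngrams(response, n)
    if not resp_grams:
        return 0.0
    prompt_grams = ngrams(prompt, n)
    return len(resp_grams & prompt_grams) / len(resp_grams)

if __name__ == "__main__":
    prompt = "You enter the tavern. The innkeeper nods at you from behind the bar."
    response = "The innkeeper nods at you from behind the bar, then smiles."
    # High overlap -> likely parroting. The 0.3 cutoff is an arbitrary
    # starting point; tune it against your own chat logs.
    print(parrot_score(prompt, response) > 0.3)
```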
Read more →