Whatever Miqu is, it has some sort of special sauce. It gets an 83.5 on EQ-Bench (evaluated locally), surpassing *every other LLM in the world except GPT-4*. EQ-Bench has a 0.97 correlation w/ MMLU, and a 0.94 correlation w/ Arena Elo. It *beats* Mistral Medium - even at Q4_K_M quantization. I would strongly encourage @lmsysorg to add miqu to the leaderboard so we can properly test it. I originally saw this intriguing EQ-Bench result in a random anon tweet that I can't find. I replicated it myself, but if someone knows the link to it, please post in the comments so I can credit the idea! It would also be awesome if someone could check miqu for dataset contamination with EQ-Bench.
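As a side note on the correlation figures quoted above: a Pearson correlation between two benchmarks is straightforward to compute yourself if you have per-model scores. Here's a minimal sketch; the score lists are purely illustrative placeholders, not real leaderboard data.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length score lists.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-model scores (illustrative only, NOT actual benchmark results)
eq_bench = [83.5, 79.1, 76.4, 71.0, 65.2]
mmlu     = [86.0, 81.3, 78.9, 73.5, 68.1]

r = pearson(eq_bench, mmlu)
print(round(r, 3))  # close to 1.0 for these near-linear toy lists
```

With real data you'd pull each model's scores from the respective leaderboards and correlate over the set of models that appear on both.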
I'm still going to get GPT anyway; I'm interested in the subscription purely because of GPTs. I looked at what people are building with them, and it really caught my interest.
https://dtf.ru/u/32166-di-di/2466573-opengpts-ustanovka-i-kratiy-obzor
So how can it be downloaded and tried?
There's a link to the model above.