Updated 2026-04-06


AI Image Generation Model Leaderboard

Compare the top AI image generation models across Artificial Analysis, Design Arena, and Arena AI: unified rankings, source-by-source scores, speed, and API pricing in one table.

Top consensus: GPT Image 1.5 (high)


- Models ranked: 56 in this table
- Leading model: GPT Image 1.5 (high), 99.3 consensus
- Median consensus: 53.7 (typical model in this list)
- Gap to 2nd: 0.8 consensus points (1st vs 2nd)

Showing 56 of 56 · Snapshot 2026-04-06
Sources: Artificial Analysis (AA) · Design Arena (DA) · Arena AI (Ar). Per-source cells show rank · Elo; "–" means the model does not appear on that list or the value is missing from the export.

| # | Model | Provider | Consensus | Artificial Analysis | Design Arena | Arena AI | Speed | Price |
|---|-------|----------|-----------|---------------------|--------------|----------|-------|-------|
| 1 | GPT Image 1.5 (high) | OpenAI | 99.3 | #1 · 1266 | #1 · 1334 | #2 · 1244 | 42.1s | $0.13/img |
| 2 | Nano Banana 2 (Gemini 3.1 Flash Image Preview) | Google | 98.6 | #2 · 1257 | #2 · 1327 | #1 · 1265 | 26.5s | $0.07/img |
| 3 | Nano Banana Pro (Gemini 3 Pro Image) | Google | 95.0 | #3 · 1214 | #4 · 1291 | #3 · 1233 | 25.8s | $0.13/img |
| 4 | MAI Image 2 | Microsoft AI | 92.2 | – | – | #5 · 1190 | 10s | – |
| 5 | FLUX.2 [max] | Black Forest Labs | 90.6 | #4 · 1200 | – | #8 · 1166 | 27.9s | $0.07/img |
| 6 | FLUX.2 [flex] | Black Forest Labs | 86.4 | #7 · 1180 | #6 · 1262 | #10 · 1158 | 21.3s | $0.06/img |
| 7 | FLUX.2 [pro] | Black Forest Labs | 85.5 | #6 · 1182 | #7 · 1246 | #11 · 1158 | 15.7s | $0.03/img |
| 8 | FLUX.2 [dev] Turbo (Open Weights) | Fal | 81.0 | #11 · 1163 | – | – | 4.3s | $0.008/img |
| 9 | grok-imagine-image | xAI | 80.4 | #9 · 1170 | #14 · 1228 | #7 · 1173 | 4.4s | $0.02/img |
| 10 | Imagen 4 Ultra | Google | 79.7 | #10 · 1169 | #8 · 1240 | #15 · 1148 | 11.2s | $0.06/img |
| 11 | Nano Banana (Gemini 2.5 Flash Image) | Google | 79.4 | #12 · 1163 | #10 · 1234 | #12 · 1155 | 7.7s | $0.04/img |
| 12 | Seedream 4.5 | ByteDance Seed | 79.3 | #8 · 1172 | – | #16 · 1144 | 21.5s | $0.04/img |
| 13 | Seedream 4.0 | ByteDance Seed | 77.8 | #5 · 1185 | #12 · 1233 | #17 · 1141 | 17.4s | $0.03/img |
| 14 | grok-imagine-image-pro | xAI | 76.6 | #19 · 1136 | – | #9 · 1162 | 17.1s | $0.07/img |
| 15 | ImagineArt 1.5 Preview | ImagineArt | 75.9 | #15 · 1148 | – | – | – | $0.03/img |
| 16 | FLUX.2 [dev] (Open Weights) | Black Forest Labs | 73.5 | #17 · 1146 | – | #14 · 1148 | 3.4s | $0.01/img |
| 17 | FLUX.2 [dev] Flash (Open Weights) | Fal | 70.7 | #18 · 1137 | – | – | 4.3s | $0.005/img |
| 18 | Wan 2.5 Preview | Alibaba | 67.5 | #16 · 1148 | – | #21 · 1117 | – | – |
| 19 | Wan 2.6 Image | Alibaba | 66.0 | #20 · 1134 | – | #19 · 1135 | – | $0.03/img |
| 20 | GPT Image 1 (high) | OpenAI | 65.6 | #24 · 1127 | #9 · 1238 | #23 · 1115 | 44s | $0.17/img |
| 21 | Wan2.6 Text to Image | Alibaba | 63.8 | #21 · 1133 | – | – | – | $0.03/img |
| 22 | Qwen Image Max 2512 (Open Weights) | Alibaba | 62.3 | #13 · 1149 | #26 · 1167 | #18 · 1136 | 16.1s | $0.02/img |
| 23 | Seedream 3.0 | ByteDance Seed | 61.3 | #14 · 1149 | – | #29 · 1083 | – | $0.03/img |
| 24 | Seedream 5.0 Lite | ByteDance | 60.3 | #29 · 1116 | #11 · 1234 | #24 · 1115 | 38.9s | $0.04/img |
| 25 | HunyuanImage 3.0 (Fal) (Open Weights) | Tencent | 59.3 | #27 · 1120 | #24 · 1177 | #13 · 1151 | 35.9s | $0.10/img |
| 26 | Vivago 2.1 | HiDream | 58.6 | #25 · 1126 | – | – | – | $0.04/img |
| 27 | Kolors 2.1 | KlingAI | 56.9 | #26 · 1123 | – | – | – | $0.01/img |
| 28 | Recraft V4 Pro | Recraft | 55.4 | #23 · 1129 | #23 · 1202 | – | 32.9s | $0.25/img |
| 29 | Imagen 4 Standard | Google | 52.1 | #39 · 1096 | #17 · 1223 | #20 · 1132 | 8.4s | $0.04/img |
| 30 | Recraft V4 | Recraft | 51.2 | #28 · 1120 | – | #27 · 1099 | 13s | $0.04/img |
| 31 | FLUX.1 Kontext [max] | Black Forest Labs | 47.8 | #31 · 1114 | #20 · 1205 | #31 · 1075 | 33.8s | $0.08/img |
| 32 | Reve Image (Halfmoon) | Reve | 47.0 | #40 · 1092 | #35 · 1111 | #6 · 1177 | 6.2s | – |
| 33 | FLUX.2 [klein] 9B (Open Weights) | Black Forest Labs | 46.9 | #22 · 1133 | #28 · 1153 | #32 · 1067 | 5.1s | $0.01/img |
| 34 | Z-Image Turbo (Open Weights) | Alibaba | 46.6 | #30 · 1115 | – | #30 · 1077 | 1.7s | $0.005/img |
| 35 | GPT Image 1 Mini (medium) | OpenAI | 46.4 | #46 · 1072 | #13 · 1231 | #26 · 1104 | 41.6s | $0.01/img |
| 36 | Eigen Image | Eigen AI | 43.1 | #34 · 1106 | – | – | – | $0.03/img |
| 37 | Vidu Q2 | Vidu | 42.5 | #32 · 1113 | #27 · 1163 | – | 21.3s | $0.03/img |
| 38 | Imagen 3 (v002) | Google | 41.4 | #36 · 1103 | #21 · 1205 | #35 · 1058 | 6.9s | $0.04/img |
| 39 | Lucid Origin Fast | Leonardo.Ai | 41.4 | #35 · 1105 | – | – | 12s | $0.02/img |
| 40 | HunyuanImage 3.0 Instruct (Fal) (Open Weights) | Tencent | 37.9 | #37 · 1103 | – | – | 39s | $0.09/img |
| 41 | Vivago 2.0 | HiDream | 36.2 | #38 · 1103 | – | – | – | – |
| 42 | FLUX.1 Kontext [pro] | Black Forest Labs | 36.1 | #42 · 1080 | #25 · 1168 | #34 · 1059 | 16.8s | $0.04/img |
| 43 | Lucid Origin Ultra | Leonardo.Ai | 30.3 | #33 · 1111 | – | #44 · 1013 | 22s | $0.09/img |
| 44 | FLUX1.1 [pro] Ultra | Black Forest Labs | 28.3 | #41 · 1089 | #32 · 1129 | – | 11.8s | $0.06/img |
| 45 | Dreamina 3.1 | ByteDance | 27.6 | #43 · 1079 | – | – | 11.1s | $0.03/img |
| 46 | Ideogram 3.0 | Ideogram | 27.3 | #44 · 1076 | #31 · 1142 | #37 · 1049 | 9.8s | $0.06/img |
| 47 | MAI Image 1 | Microsoft AI | 26.9 | #55 · 1033 | – | #28 · 1093 | 20s | – |
| 48 | Recraft V3 | Recraft | 25.2 | #47 · 1072 | #30 · 1147 | #41 · 1021 | 7.7s | $0.04/img |
| 49 | Qwen Image Plus 2601 | Alibaba | 22.1 | #50 · 1065 | – | #33 · 1061 | 4.5s | $0.03/img |
| 50 | FLUX1.1 [pro] | Black Forest Labs | 20.0 | #45 · 1075 | – | #43 · 1016 | 3.3s | $0.04/img |
| 51 | Qwen Image (Open Weights) | Alibaba | 18.1 | #54 · 1058 | #33 · 1121 | #36 · 1057 | 28.1s | $0.02/img |
| 52 | FLUX.2 [klein] 4B (Open Weights) | Black Forest Labs | 17.3 | #48 · 1068 | #37 · 1105 | #40 · 1021 | 5.1s | $0.01/img |
| 53 | Luma Photon | Luma Labs | 14.6 | #53 · 1063 | – | #38 · 1035 | 18.5s | $0.02/img |
| 54 | FLUX.1 [pro] | Black Forest Labs | 13.8 | #49 · 1067 | – | – | 4.3s | $0.05/img |
| 55 | P-Image | Pruna AI | 11.4 | #51 · 1064 | #40 · 1087 | #39 · 1034 | 1.9s | $0.005/img |
| 56 | Imagen 4 Fast | Google | 10.3 | #52 · 1064 | #36 · 1109 | – | 4.1s | $0.02/img |

Methodology

Each source uses preference data to estimate skill scores. We map ranks to percentiles and average them where a model appears on multiple lists. The bar in the Consensus column is green; purple, rose, and amber bars match the Artificial Analysis, Design Arena, and Arena AI columns. Speed is the approximate time to the first image.
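The rank-to-consensus mapping described above can be sketched in a few lines. The exact percentile formula and the per-source list sizes are not published in this snapshot, so the linear mapping and the sizes below are assumptions for illustration, not the site's actual implementation.

```python
# Hedged sketch of the Methodology paragraph: map each per-source rank to a
# percentile, then average across only the sources where the model appears.

def rank_to_percentile(rank: int, n_models: int) -> float:
    """Linear mapping: rank 1 -> 100, rank n -> 0 (an assumed formula)."""
    return 100.0 * (n_models - rank) / (n_models - 1)

def consensus(ranks: dict[str, int], sizes: dict[str, int]) -> float:
    """Average percentiles over the benchmarks where the model is listed."""
    pcts = [rank_to_percentile(r, sizes[src]) for src, r in ranks.items()]
    return sum(pcts) / len(pcts)

# GPT Image 1.5 (high): #1 on AA, #1 on DA, #2 on Ar.
# The per-source list sizes here are illustrative guesses.
score = consensus({"AA": 1, "DA": 1, "Ar": 2},
                  {"AA": 56, "DA": 40, "Ar": 44})
```

With any reasonable list sizes, two #1 finishes plus a #2 land in the high 90s, which is consistent with the 99.3 shown in the table; a model missing from a source simply contributes nothing to its average rather than a zero.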

FAQ

Answers below use the same snapshot as the table above (as of 2026-04-06, 56 models). Figures are from our export, not live pages at Artificial Analysis, Design Arena, or Arena AI—those sites may have moved on since we built this snapshot. The Consensus column is our average of percentile ranks across benchmarks where each model appears.

Which models are the newest?

We use each model's released field from the export. Among rows with a parseable date, the newest in this snapshot are Nano Banana 2 (Gemini 3.1 Flash Image Preview), grok-imagine-image-pro, and Seedream 5.0 Lite, all dated 2026-02-01.

Which model is best?

By default we sort by Consensus, so GPT Image 1.5 (high) leads this snapshot at 99.3 (its average percentile across the benchmarks where it appears). By Elo in the Artificial Analysis column alone, GPT Image 1.5 (high) is also highest, at 1266. “Best” still depends on price, latency, and which benchmarks you care about—use the sortable table.

Which models have the highest Artificial Analysis Elo?

The Elo values below are the Artificial Analysis numbers in this export (2026-04-06), not necessarily what you see on Artificial Analysis today:
  1. GPT Image 1.5 (high) — Elo 1266
  2. Nano Banana 2 (Gemini 3.1 Flash Image Preview) — Elo 1257
  3. Nano Banana Pro (Gemini 3 Pro Image) — Elo 1214
  4. FLUX.2 [max] — Elo 1200
  5. Seedream 4.0 — Elo 1185

What is the difference between text-to-image and image-editing models?

Text-to-image models generate new images from a text prompt alone. Image-editing models take both an input image and editing instructions, then return a modified version. Our table covers text-to-image leaderboards from the sources named in the header.

How are the rankings computed, and can I influence them?

Each upstream source runs preference tests and publishes ranks or scores. We map those to percentiles within each benchmark, then average across the benchmarks where a model appears—that average is the Consensus column (see Methodology above). Per-source columns show the ranks and scores stored in our snapshot for Artificial Analysis, Design Arena, and Arena AI. To influence the upstream leaderboards, participate on those sites; our table updates when we refresh the export.

Which open-weights models rank highest?

We flag open-weights rows via the "Open Weights" suffix on model names in the export. By Artificial Analysis Elo in this snapshot, the highest are:
  1. FLUX.2 [dev] Turbo — Elo 1163
  2. Qwen Image Max 2512 — Elo 1149
  3. FLUX.2 [dev] — Elo 1146
Treat naming as a signal only—confirm license terms with each provider before production use.

What do the Elo values mean?

Elo in our table is the value stored in the snapshot for the Artificial Analysis column (with similar skill estimates for the other sources). Margins of error and intervals (e.g. in CI columns) come from that same export. For how Artificial Analysis computes Elo from votes, see their methodology; our numbers stay fixed until the next snapshot refresh.
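For readers unfamiliar with Elo, here is a minimal sketch of the standard update rule applied to one pairwise preference vote. The upstream arenas publish their own methodologies, so the constants here (K = 32, a 400-point logistic scale) are textbook defaults, not the values any of these sites necessarily use.

```python
# Standard Elo: expected win probability from a rating gap, then a K-scaled
# update toward the observed outcome of one head-to-head vote.

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return (new_r_a, new_r_b) after one preference vote; gains and
    losses are symmetric, so total rating is conserved."""
    e_a = expected(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two equally rated models: the winner gains k/2 points, the loser drops k/2.
a, b = update(1000.0, 1000.0, a_won=True)
```

An intuition for the table above: a gap like 1266 vs 1257 between the top two models implies an expected head-to-head preference rate only slightly above 50%, which is why the methodology FAQ treats small Elo differences as close calls.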