Updated 2026-04-06


Image to Video Model Leaderboard

Compare the top AI image-to-video generation models across Artificial Analysis, Design Arena, and Arena AI: unified rankings, source-by-source scores, speed, and API pricing in one table.

Top consensus: Dreamina Seedance 2.0 720p


- Models ranked: 58 in this table
- Leading model: Dreamina Seedance 2.0 720p (100.0 consensus)
- Median consensus: 55.1 (the typical model in this list)
- Gap to 2nd: 1.1 consensus points (1st vs 2nd)

Showing 58 of 58 · Snapshot 2026-04-06
Sources: Artificial Analysis · Design Arena · Arena AI
Source cells show rank (Elo) within that leaderboard; — means the model does not appear on that source's list.

| Rank | Model | Developer | Consensus | Artificial Analysis | Design Arena | Arena AI | Speed | Price |
|---|---|---|---|---|---|---|---|---|
| 🥇 | Dreamina Seedance 2.0 720p | ByteDance Seed | 100.0 | #1 (1356) | — | — | 35s | — |
| 🥈 | grok-imagine-video | xAI | 98.9 | #3 (1333) | #1 (1328) | #1 (1404) | 23.1s | $4.20/min |
| 🥉 | PixVerse V6 | PixVerse | 98.4 | #2 (1339) | — | — | — | $5.40/min |
| 4 | Veo 3 Fast | Google | 96.3 | — | #2 (1290) | — | 39.8s | — |
| 5 | SkyReels V4 | Skywork AI | 91.9 | #6 (1292) | — | — | — | $7.20/min |
| 6 | Veo 3.1 Fast | Google | 91.0 | #7 (1291) | #3 (1279) | #4 (1383) | 43.8s | $6.00/min |
| 7 | Kling 2.6 Standard (January) | KlingAI | 85.5 | #10 (1282) | — | — | — | — |
| 8 | Vidu Q3 Pro | Vidu | 85.2 | #9 (1287) | — | #7 (1353) | 138.7s | $9.60/min |
| 9 | Wan 2.7 | Alibaba | 85.2 | — | #5 (1276) | — | 214s | — |
| 10 | Veo 3.1 | Google | 83.5 | #23 (1247) | #4 (1276) | #2 (1402) | 48.9s | $12.00/min |
| 11 | PixVerse V5.5 | PixVerse | 82.3 | #12 (1275) | — | — | — | $6.40/min |
| 12 | Kling 3.0 1080p (Pro) | KlingAI | 81.0 | #11 (1277) | #6 (1273) | #9 (1334) | 218s | $13.44/min |
| 13 | PixVerse V5 | PixVerse | 80.6 | #13 (1272) | — | — | — | $6.40/min |
| 14 | Kling 3.0 Omni 1080p (Pro) | KlingAI | 79.1 | #4 (1298) | #11 (1204) | — | 95s | $13.44/min |
| 15 | Kling 3.0 720p (Standard) | KlingAI | 79.0 | #14 (1267) | — | — | 64.3s | $10.08/min |
| 16 | Kling 3.0 Omni 720p (Standard) | KlingAI | 77.4 | #15 (1266) | — | — | 70.4s | $10.08/min |
| 17 | TeleVideo 2.0 | TeleAI | 75.8 | #16 (1266) | — | — | — | — |
| 18 | PixVerse V5.6 | PixVerse | 75.7 | #8 (1291) | — | #15 (1279) | — | $9.00/min |
| 19 | Wan 2.5 Preview | Alibaba | 75.0 | #20 (1252) | — | #8 (1339) | 102s | $9.00/min |
| 20 | Runway Gen-4.5 | Runway | 74.2 | #17 (1264) | — | — | — | — |
| 21 | Sora 2 | OpenAI | 74.1 | — | #8 (1248) | — | 87s | — |
| 22 | Sora 2 Pro | OpenAI | 70.4 | — | #9 (1248) | — | 136s | — |
| 23 | Veo 3 | Google | 69.7 | #28 (1238) | #7 (1259) | #10 (1331) | 42.4s | $12.00/min |
| 24 | Seedance 1.5 pro | ByteDance Seed | 67.4 | #21 (1251) | #10 (1208) | #12 (1300) | 83s | $5.93/min |
| 25 | Kling 2.5 Turbo 1080p | KlingAI | 67.0 | #5 (1295) | #14 (1183) | #17 (1272) | 89s | $4.20/min |
| 26 | Kling 2.6 Pro (January) | KlingAI | 65.2 | #18 (1254) | #12 (1201) | #14 (1289) | 115s | $4.20/min |
| 27 | Hailuo 2.3 Fast | MiniMax | 62.9 | #24 (1246) | — | — | 86.5s | $1.00/min |
| 28 | Wan 2.6 | Alibaba | 59.1 | #31 (1225) | — | #13 (1297) | — | $9.00/min |
| 29 | Hailuo 2.3 | MiniMax | 55.3 | #22 (1251) | #15 (1182) | #19 (1255) | 186s | $2.80/min |
| 30 | Hailuo 02 Pro | MiniMax | 54.9 | #19 (1253) | — | #23 (1228) | — | $4.90/min |
| 31 | Vidu Q2 Turbo | Vidu | 51.3 | #27 (1240) | — | #21 (1244) | — | $4.00/min |
| 32 | Seedance 1.0 | ByteDance Seed | 51.0 | #25 (1242) | #19 (1125) | #16 (1272) | 56s | $7.32/min |
| 33 | Kling 2.1 Master | KlingAI | 45.8 | #32 (1219) | — | #22 (1232) | — | $16.80/min |
| 34 | Ray 3 | Luma Labs | 45.5 | #30 (1229) | #13 (1197) | #27 (1221) | 64s | $13.20/min |
| 35 | Hailuo 02 Standard | MiniMax | 45.1 | #26 (1241) | — | #26 (1222) | — | $2.80/min |
| 36 | MiniMax Hailuo-2.3 (Standard) | MiniMax | 44.4 | — | #16 (1175) | — | 103s | — |
| 37 | Vidu Q2 Pro | Vidu | 44.1 | #29 (1235) | — | #25 (1224) | — | $6.10/min |
| 38 | Glam.ai 1.0 | Glam.ai | 40.7 | — | #17 (1155) | — | 156s | — |
| 39 | Kling 2.1 Standard | KlingAI | 37.4 | #35 (1180) | — | #24 (1225) | — | $3.36/min |
| 40 | LTX-2.3 Fast (Open Weights) | Lightricks | 35.5 | #36 (1169) | — | — | 20.6s | $2.40/min |
| 41 | Hailuo 02 Fast | MiniMax | 34.5 | #33 (1201) | — | #29 (1194) | — | $1.00/min |
| 42 | LTX-2.3 Pro (Open Weights) | Lightricks | 33.9 | #37 (1161) | — | — | 58.6s | $3.60/min |
| 43 | P-Video | Pruna AI | 30.4 | #39 (1139) | #18 (1146) | #28 (1195) | 30.7s | $1.20/min |
| 44 | P-Video (720p) | Pruna AI | 29.6 | — | #20 (1123) | — | 13.1s | — |
| 45 | Kandinsky 5.0 Pro | AI Forever | 25.9 | — | #21 (1100) | — | 268s | — |
| 46 | Seedance 1.0 Mini | ByteDance Seed | 23.7 | #38 (1144) | — | #31 (1182) | 28.3s | — |
| 47 | LTX-2 Pro (Open Weights) | Lightricks | 22.8 | #34 (1197) | #24 (1052) | #34 (1126) | 58.9s | $3.60/min |
| 48 | HunyuanVideo-1.5 (Fal) (Open Weights) | Tencent | 21.8 | #40 (1120) | — | #30 (1193) | 581.8s | $4.50/min |
| 49 | Kandinsky 5.0 | AI Forever | 18.5 | — | #23 (1070) | — | 200s | — |
| 50 | Wan 2.2 A14B (Open Weights) | Alibaba | 15.9 | #41 (1117) | #25 (1052) | #32 (1167) | 48.7s | $4.80/min |
| 51 | Veo 2 | Google | 15.2 | #43 (1106) | — | #33 (1164) | 21.7s | $30.00/min |
| 52 | Wan 2.1 14B (Open Weights) | Alibaba | 14.3 | #45 (1000) | #22 (1075) | — | 60s | $4.80/min |
| 53 | Runway Gen 4 | Runway | 11.9 | #42 (1111) | — | #36 (1047) | — | — |
| 54 | Ray 2 | Luma Labs | 6.5 | — | #26 (1044) | #35 (1104) | 133s | — |
| 55 | Pika 2.2 | Pika Art | 4.8 | #44 (1009) | — | #37 (994) | — | — |
| 56 | Ray 2 Flash | Luma Labs | 3.7 | — | #27 (1030) | — | 36.6s | — |
| 57 | Wan 2.2 5B (Open Weights) | Alibaba | 3.2 | #46 (988) | — | — | 25.1s | $1.80/min |
| 58 | Hunyuan Video (Fal) (Open Weights) | Tencent | 0.0 | #47 (933) | #28 (972) | — | 602s | $4.80/min |

Methodology

Each source uses preference data to estimate skill scores. We map ranks to percentiles within each benchmark and average where a model appears on multiple lists. The bar in the Consensus column is green; the purple, rose, and amber bars correspond to the Artificial Analysis, Design Arena, and Arena AI columns respectively. Speed is the approximate time to generate a clip.
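The exact rank-to-percentile mapping is not published here, so the sketch below assumes a simple linear mapping (rank 1 → 100, last rank → 0); its output only approximates, and will not exactly reproduce, the published Consensus numbers. The list sizes (47, 28, 37) come from the largest ranks visible in this snapshot.

```python
# Hedged sketch of the Consensus computation described in Methodology.
# Assumption: a linear rank-to-percentile mapping; the real formula may differ.

def rank_to_percentile(rank: int, n_models: int) -> float:
    """Map a 1-based rank within one benchmark to a 0-100 percentile."""
    if n_models == 1:
        return 100.0
    return 100.0 * (n_models - rank) / (n_models - 1)

def consensus(ranks: dict[str, tuple[int, int]]) -> float:
    """Average percentiles across the benchmarks where the model appears.

    `ranks` maps source name -> (rank on that source, size of that source's list).
    """
    pcts = [rank_to_percentile(r, n) for r, n in ranks.values()]
    return sum(pcts) / len(pcts)

# grok-imagine-video in this snapshot: AA #3 of 47, DA #1 of 28, Ar #1 of 37
score = consensus({
    "Artificial Analysis": (3, 47),
    "Design Arena": (1, 28),
    "Arena AI": (1, 37),
})  # close to, but not exactly, the published 98.9
```

A model missing from a source simply contributes no term to the average, which is why single-source models like Dreamina Seedance 2.0 720p can sit at 100.0.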

FAQ

Answers below use the same snapshot as the table above (as of 2026-04-06, 58 models). Figures are from our export, not live pages at Artificial Analysis, Design Arena, or Arena AI—those sites may have moved on since we built this snapshot. The Consensus column is our average of percentile ranks across benchmarks where each model appears.

We use each model's "released" field from the export. Among rows with a parseable date, the newest in this snapshot include: Dreamina Seedance 2.0 720p (2026-03-01); PixVerse V6 (2026-03-01); SkyReels V4 (2026-03-01).

By default we sort by Consensus, so Dreamina Seedance 2.0 720p leads this snapshot at 100.0 (average percentile across benchmarks where the model appears). By Elo in the Artificial Analysis column alone, Dreamina Seedance 2.0 720p is highest at 1356. “Best” still depends on price, latency, and which benchmarks you care about—use the sortable table.

The Elo values below are the Artificial Analysis numbers in this export (2026-04-06), not necessarily what you see on Artificial Analysis today:
  1. Dreamina Seedance 2.0 720p — Elo 1356
  2. PixVerse V6 — Elo 1339
  3. grok-imagine-video — Elo 1333
  4. Kling 3.0 Omni 1080p (Pro) — Elo 1298
  5. Kling 2.5 Turbo 1080p — Elo 1295

Text-to-video models generate video clips from a text prompt alone. Image-to-video models take both a reference image and a text prompt, then animate the image into a video clip. Our table is built for image-to-video leaderboards from the sources named in the header.

Each upstream source runs preference tests and publishes ranks or scores. We map those to percentiles within each benchmark, then average across benchmarks where a model appears—that is the Consensus column (see Methodology above the FAQ). Per-source columns show the ranks and scores stored in our snapshot for Artificial Analysis, Design Arena, and Arena AI. To influence the upstream leaderboards, vote on those sites; our table updates when we refresh the export.

We flag open-weights rows using the "Open Weights" suffix on model names in the export. By Artificial Analysis Elo in this snapshot, the highest are:
  1. LTX-2 Pro — Elo 1197
  2. LTX-2.3 Fast — Elo 1169
  3. LTX-2.3 Pro — Elo 1161
Treat naming as a signal only—confirm license terms with each provider before production use.

Elo in our table is the value from the snapshot for the Artificial Analysis column (and similar skill estimates elsewhere). Margin of error / intervals (e.g. in CI columns) come from that same export. For how Artificial Analysis computes Elo from votes, see their methodology; our numbers stay fixed until the next snapshot refresh.
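For intuition on how pairwise votes become Elo scores, here is the standard logistic Elo update with a hypothetical K-factor of 32; this is an illustration only, not necessarily the exact variant any of these leaderboards uses.

```python
# Illustrative Elo update from one pairwise preference vote.
# Assumptions: logistic expected-score curve with a 400-point scale
# and K = 32; real leaderboards may use different constants or a
# Bradley-Terry fit over all votes at once.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A is preferred over model B under logistic Elo."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: float, k: float = 32.0):
    """Return (new_r_a, new_r_b) after one vote; a_won is 1.0, 0.0, or 0.5 for a tie."""
    delta = k * (a_won - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta
```

Starting two models at equal ratings, a single win moves them 16 points apart; repeated over many votes, ratings converge toward stable skill estimates like the 1356 and 1333 values in the Artificial Analysis column.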