MLIR (Multi-Level Intermediate Representation) was introduced by Google in April 2019 and was designed from the outset to serve as a compiler IR. Among its key features:
Ops, ranging from general purpose to domain specific, that operate on tensor and memref types:
~~~{plaintext}
%patches_flat = "tf.reshape"(%patches, %minus_one, %minor_dim_size)
    : (tensor<? x ? x ? x ? x f32>, index, index) -> tensor<? x ? x f32>
%mat_out = "tf.matmul"(%patches_flat, %patches_flat) {transpose_a: true}
    : (tensor<? x ? x f32>, tensor<? x ? x f32>) -> tensor<? x ? x f32>
%vec_out = "tf.reduce_sum"(%patches_flat) {axis: 0}
    : (tensor<? x ? x f32>) -> tensor<? x f32>
~~~
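The same surface syntax also covers the buffer level. The snippet below is a rough sketch, not part of the original example, using the alloc/load/store ops of the standard dialect as spelled at the time (they have since moved to the memref dialect):

~~~{plaintext}
// Sketch only: general-purpose ops operating on an explicit memref buffer
// rather than on a value-semantic tensor.
%buf = alloc() : memref<128x128xf32>
%i   = constant 0 : index
%j   = constant 1 : index
%elt = load %buf[%i, %j] : memref<128x128xf32>
store %elt, %buf[%j, %i] : memref<128x128xf32>
~~~

Unlike tensors, which are immutable values, memrefs refer to mutable memory buffers, but they appear in op signatures in exactly the same way.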
There are no phi nodes; instead, basic blocks take arguments:
~~~{plaintext}
func @condbr_simple() -> (i32) {
  %cond = "foo"() : () -> i1
  %a = "bar"() : () -> i32
  %b = "bar"() : () -> i64
  // Branch to ^bb1 with %a, or to ^bb2 with %b; the successor arguments
  // play the role that phi nodes would in LLVM IR.
  cond_br %cond, ^bb1(%a : i32), ^bb2(%b : i64)

^bb1(%x : i32):
  %w = "foo_bar"(%x) : (i32) -> i64
  br ^bb2(%w : i64)

^bb2(%y : i64):
  %z = "abc"(%y) : (i64) -> i32
  return %z : i32
}
~~~
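To see what block arguments buy over phi nodes, consider a loop-carried value. This is a hypothetical sketch in the same style as the example above ("zero", "less_than", "add", and "increment" are made-up placeholder ops, just like "foo" and "bar"): every value that LLVM IR would merge with a phi node simply becomes an argument of the loop-header block ^loop.

~~~{plaintext}
func @loop_example(%n : i32) -> (i32) {
  %init = "zero"() : () -> i32
  br ^loop(%init, %init : i32, i32)

// ^loop merges values coming from the entry block and from ^body; the
// loop-carried induction variable and accumulator are its block arguments.
^loop(%i : i32, %acc : i32):
  %cond = "less_than"(%i, %n) : (i32, i32) -> i1
  cond_br %cond, ^body(%i, %acc : i32, i32), ^exit(%acc : i32)

^body(%iv : i32, %sum : i32):
  %sum_next = "add"(%sum, %iv) : (i32, i32) -> i32
  %iv_next = "increment"(%iv) : (i32) -> i32
  br ^loop(%iv_next, %sum_next : i32, i32)

^exit(%result : i32):
  return %result : i32
}
~~~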