The textual pass pipeline has a bit more overhead due to string parsing, but it reduces the required maintenance, since we don't have to write C API and Python bindings for every pass option.
This commit refactors the AIG longest path analysis C API to use native C structures instead of JSON strings, providing better performance and type safety.
The API changes replace `aigLongestPathCollectionGetPath` returning JSON with `aigLongestPathCollectionGetDataflowPath` returning native objects. New opaque handle types are added including `AIGLongestPathObject`, `AIGLongestPathHistory`, and `AIGLongestPathDataflowPath`. Comprehensive APIs are provided for accessing path data, history, and object properties.
InstancePath C API support is introduced in `circt-c/Support/InstanceGraph.h`. `InstancePathCache` itself is not exposed for now, since LongestPathAnalysis only reads instance paths and there is no need to mutate or construct an InstancePath. Unfortunately, this makes testing the InstancePath C API a bit tricky; for now, AIGLongestPathAnalysis is used to produce InstancePaths in the C API.
The Python binding updates refactor the Object, DataflowPath, and LongestPathHistory classes to use the native C API. JSON parsing dependencies and `from_json_string()` methods are removed, and proper property accessors using the new C API are added while maintaining backward compatibility for existing Python interfaces, so the existing integration tests cover most of the APIs.
Testing updates include comprehensive coverage in the existing C API tests in `test/CAPI/aig.c`. A new `test/CAPI/support.c` is added for InstancePath API testing. Python integration tests are updated to work with the new API.
This change improves performance by eliminating JSON serialization/deserialization overhead and provides a more robust, type-safe interface for accessing longest path analysis results.
This commit introduces C API bindings for AIG LongestPathAnalysis and LongestPathCollection, enabling longest path analysis of AIG circuits from C and other languages.
The API uses JSON serialization for path data exchange, providing a stable interface while the underlying data structures evolve. Paths are automatically sorted by delay in descending order for efficient critical path analysis.
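The descending-by-delay ordering means the critical path is simply the first element of the collection, and a "top N" query is a prefix slice. A minimal sketch of this idea, with illustrative names (`Path`, `delay`) that are not the actual API:

```python
from dataclasses import dataclass

@dataclass
class Path:
    fan_in: str
    fan_out: str
    delay: int  # accumulated delay along the path

def sort_paths(paths):
    """Sort paths by delay in descending order, as the collection does."""
    return sorted(paths, key=lambda p: p.delay, reverse=True)

paths = sort_paths([
    Path("a", "x", 3),
    Path("b", "y", 7),
    Path("c", "z", 5),
])
critical = paths[0]  # longest path comes first
top2 = paths[:2]     # a prefix slice yields the N most critical paths
```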
This commit implements a complete AIGER parser that supports both the ASCII (.aag) and binary (.aig) formats, following the AIGER format specification at https://fmv.jku.at/aiger/FORMAT.aiger. AIGER is a widely used format, supported by the ABC synthesis tool, and serves as a standard for logic synthesis benchmarks, making it essential for integrating with efficient solvers and verification tools. The parser handles all core AIGER components, including inputs, latches, outputs, and AND gates, with support for the binary format's delta compression of AND gates. It also parses the optional symbol table and comment sections, and creates MLIR modules using the HW, AIG, and Seq dialects, with BackedgeBuilder handling forward references.
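To illustrate the components the parser handles, here is a minimal sketch of the ASCII (.aag) flavor per the AIGER spec; the real parser additionally handles the binary format's delta compression, the symbol table, and comments.

```python
def parse_aag(text):
    """Parse a minimal ASCII AIGER file into its core components.

    Literal encoding follows the spec: variable i is literal 2*i,
    its negation is 2*i + 1; literals 0/1 are constant false/true.
    """
    lines = text.strip().splitlines()
    tag, _m, i, l, o, a = lines[0].split()[:6]
    assert tag == "aag", "ASCII AIGER files start with 'aag'"
    i, l, o, a = int(i), int(l), int(o), int(a)
    body = lines[1:]
    inputs = [int(x) for x in body[:i]]
    latches = [tuple(map(int, x.split())) for x in body[i:i + l]]
    outputs = [int(x) for x in body[i + l:i + l + o]]
    ands = [tuple(map(int, x.split())) for x in body[i + l + o:i + l + o + a]]
    return inputs, latches, outputs, ands

# A single AND gate: literal 6 = literal 2 AND literal 4.
inputs, latches, outputs, ands = parse_aag("aag 3 2 0 1 1\n2\n4\n6\n6 2 4\n")
```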
This is a first commit to add a Language Server Protocol (LSP) implementation
for Verilog/SystemVerilog using the slang frontend. This enables IDE features
like syntax error checking and diagnostics.
The server is built as circt-verilog-lsp-server and integrates with CIRCT's
existing Verilog import capabilities. It leverages MLIR's LSP server support
libraries for the protocol implementation.
Add the `circt-test` tool. This tool is intended to be a driver for
discovering and executing hardware unit tests in an MLIR input file. In
this first draft circt-test simply parses an MLIR assembly or bytecode
file and prints out a list of `verif.formal` operations.
This is just a starting point. We'll want this tool to be able to also
generate the Verilog code for one or more of the listed unit tests, run
the tests through tools and collect results, and much more. From a user
perspective, calling something like `circt-test design.mlirbc` should
do a sane default run of all unit tests in the provided input. But the
tool should also be useful for build systems to discover tests and run
them individually.
In practice, it is very useful to verify equivalence between modules in two different MLIR files. This commit changes `inputFilename` to a list and implements a very simple module merge that moves operations from the second module into the first by resolving symbols.
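The merge can be sketched with a toy model in which a module is just a dict mapping symbol names to operations (strings here); this is a hypothetical illustration of symbol resolution, not the tool's actual implementation.

```python
def merge_modules(first, second):
    """Move ops from `second` into `first`, resolving symbols by name."""
    merged = dict(first)
    for sym, op in second.items():
        if sym in merged:
            if merged[sym] != op:
                raise ValueError(f"conflicting definitions for @{sym}")
            continue  # identical symbol already present; nothing to move
        merged[sym] = op
    return merged

a = {"top": "hw.module @top ...", "util": "hw.module @util ..."}
b = {"util": "hw.module @util ...", "ref": "hw.module @ref ..."}
merged = merge_modules(a, b)
```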
This is the first PR in a longer chain that adds basic SV support to
CIRCT.
Add the Slang Verilog frontend as a CIRCT dependency. This will be the
foundation for CIRCT's Verilog parsing, elaboration, type checking, and
lowering to the core dialects. By default, Slang is built as a static
library from scratch, which is then linked into the new `ImportVerilog`
conversion. Alternatively, CIRCT can also be linked against a local
Slang installation provided by the system.
Add the `ImportVerilog` conversion library. This library statically
links in the Slang dependency and wraps it in an exception-safe,
LLVM-style API. Currently this only consists of the `getSlangVersion`
function and the necessary linking flags to get it to link statically
against Slang.
Add the `circt-verilog` tool, which will provide a fully fledged
interface to the new `ImportVerilog` library. Later on we'll also add an
MLIR translation library for single-file SV import. But in general, SV
builds take a lot of command line options (macros, search paths, etc.)
and multiple input files, which is why we have a dedicated tool. All the
tool does at the moment is print the linked Slang version. More to come.
Note that this intentionally links against **version 3** of Slang. Newer
versions are available -- 4 and 5 as of this commit -- but they rely on
fairly new C++ compiler features that didn't work out of the box in our
CI images. We'll eventually want to upgrade, but for now Slang 3 is
sufficient to get the ball rolling.
See https://github.com/MikePopoloski/slang for details on Slang.
Co-authored-by: ShiZuoye <albertethon@163.com>
Co-authored-by: hunterzju <hunter_ht@zju.edu.cn>
Co-authored-by: 孙海龙 <hailong.sun@terapines.com>
Adds `ibistool` - a tool for driving Ibis lowerings. The tool has two
modes - low-level and high-level Ibis lowering.
Alongside this, introduce a set of Ibis pass pipelines that other users
may load to ensure they lower Ibis constructs in the standard order.
This commit is based on prior work by Will. It adds boilerplate and a
minimal implementation for `om-linker`, which aims to link separate ObjectModel
IRs into a single IR.
With this PR, `om-linker` is able to read input MLIR files and concatenate them into
a single MLIR file without flattening the ModuleOps. Currently it doesn't perform
any symbol resolution. For instance, linking the following IR
```mlir
// a.mlir
om.class @A(%arg: i1) {}
```
```mlir
// b.mlir
om.class.extern @A(%arg: i1) {}
om.class @b(%arg: i2) {}
```
`$ om-linker a.mlir b.mlir`
```mlir
module {
  module {
    om.class @A(%arg: i1) {}
  }
  module {
    om.class.extern @A(%arg: i1) {}
    om.class @b(%arg: i2) {}
  }
}
```
The actual linking pass will be added in a follow-up PR.
Co-authored-by: Will Dietz <will.dietz@sifive.com>
This adds the usual CAPI structure and dialect registration
boilerplate, as well as CAPIs around the Evaluator library.
The APIs are intended to be as minimal and straightforward as
possible, simply wrapping and unwrapping the C++ structures when
possible.
One slight divergence is in the ObjectValue type, which is a
std::variant between an Object shared pointer and an Attribute. The
normal approach of casting to and from a void pointer does not work
with std::variant, so a different representation of ObjectValue is
used instead: the discriminated union is simply represented as a
struct in which only one field is ever set. It might be possible to save
some space using a struct with a C union and a flag, but the
simplicity of the current approach seemed reasonable.
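The "struct with only one field ever set" idea can be sketched as follows; the field names are illustrative, not the actual C API layout.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectValue:
    object: Optional[object] = None     # set when the value holds an Object
    attribute: Optional[object] = None  # set when the value holds an Attribute

    def kind(self):
        # Exactly one of the two fields may be set at any time.
        assert (self.object is None) != (self.attribute is None), \
            "exactly one field must be set"
        return "object" if self.object is not None else "attribute"

v = ObjectValue(attribute="i32 attr")
```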
Another minor detail worth mentioning is that we must take some care
to ensure the reference counts of the shared pointers to Objects are
kept up to date. In the C API for the instantiate method, if we simply
returned the shared pointer, the reference would be lost as the Object
pointer travels to C as a void pointer, so we allocate a new shared
pointer in the C API, which ensures the reference count accurately
reflects that we have handed out another reference.
Add the `arcilator` convenience tool to make experimenting with the
dialect easier. The intended pass sequence performs the full conversion
from a circuit cut along port boundaries (through modules) to a circuit
cut along the state elements (through arcs). The tool simply executes
this pass pipeline.
Co-authored-by: Martin Erhart <maerhart@outlook.com>
Co-authored-by: Zachary Yedidia <zyedidia@gmail.com>
The handshake-runner tests take a large amount of time and actually execute their input. Thus, it makes more sense to have them as part of the integration tests.
- Update/rewrite the `circt-reduce` tool with a custom proof-of-concept
reducer for the FIRRTL dialect. This is supposed to be a pathfinding
exercise and just uses FIRRTL as an example. The intent is for other
dialects to be able to produce sets of their own reducers that will
then be combined by the tool to operate on some input IR.
At this point, `circt-reduce` can be used to reduce FIRRTL test cases by
converting as many `firrtl.module` ops into `firrtl.extmodule` ops as
possible while still maintaining some interesting characteristic.
- Extend `circt-reduce` to support exploratively applying passes to the
input in an effort to reduce its size. Also add the ability to specify
an entire list of potential reduction strategies/patterns which are
tried in order. This allows for reductions with big effect, like
removing entire modules, to be tried first, before taking a closer
look at individual instructions.
- Add reduction strategies to `circt-reduce` that try to replace the
right hand side of FIRRTL connects with `invalidvalue`, and generally
try to remove operations if they have no results or no users of their
results.
- Add a reduction pattern to `circt-reduce` that replaces instances of
`firrtl.extmodule` with a `firrtl.wire` for each port. This can
significantly reduce the complexity of test cases by pruning the
module hierarchy of unused modules.
- Move the `Reduction` class and sample implementations into a separate
header and implementation file. These currently live in
`tools/circt-reduce`, but should later move into a dedicated reduction
framework, maybe in `include/circt/Reduce`, where dialects can easily
access them and provide their own reduction implementations.
Add two functions which are intended to be exposed over an API:
- A function to heuristically locate ready/valid signal port triplets on an RTL module.
- A function which takes an RTL module and a list of those port triplets and builds a 'shell' around that module which 'uplifts' the port triplets to ESI channels.
Adds an 'esi-tester' binary to execute these two functions.
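One plausible form of the port-name heuristic: a data port `x` accompanied by `x_valid` and `x_ready` forms a candidate channel. This is a hypothetical sketch under that naming assumption, not the actual heuristic.

```python
def find_ready_valid_triplets(port_names):
    """Return (data, valid, ready) name triplets found among the ports."""
    ports = set(port_names)
    triplets = []
    for name in port_names:
        if f"{name}_valid" in ports and f"{name}_ready" in ports:
            triplets.append((name, f"{name}_valid", f"{name}_ready"))
    return triplets

triplets = find_ready_valid_triplets(
    ["clk", "data_in", "data_in_valid", "data_in_ready", "irq"]
)
```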
This adds a -shared-libs option to llhd-sim that is analogous to the
same option for mlir-cpu-runner. If specified, those libraries are
passed to the ExecutionEngine to dynamically load and link.
* Scaffolding for Capnp-dependent ESI code
* Adding 'capnp' feature
* Replicated functionality
* Just missing the complex part: schema parsing
* Parse the generated schema, get the size out of that
* Documentation
* Adding NOLINT
* Enable cloning llvm submodule over HTTP
* Introduce C API
* Undo unrelated changes
* clang-format
* More format
* Add Header Comments
* Format
* Add basic test
* Add missing incantation
* Format
Co-authored-by: George <>
* Merge LLHD project into CIRCT
* Split LLHDOps.td into multiple smaller files
* move LLHDToLLVM init definition and prune includes
* Format tablegen files with 2 space indent, 80 col width; move out trait helper function
* Move implementation logic from LLHDOps.h to cpp file
* Empty lines for breathing space; nicer operation separators
* Move simulator to Dialect/LLHD/Simulator
* move `State.h` and `signal-runtime-wrappers.h` to lib directory
* pass ModuleOp by value
* make getters const, return ModuleOp by value
* Use isa, cast, dyn_cast appropriately
* wrap struct in anon namespace; make helpers static
* [cmake] Fold into LINK_LIBS
* fix for loops
* replace floating point with `divideCeil`
* prune redundant includes
* make llhd-sim helpers static
* remove StandardToLLVM pass registration
* move verilog printer to cpp file, add global function as public API
* Move transformation pass base classes and registration to lib, add file header boilerplate
* Few improvements
* Return diagnostics directly
* isa instead of kindof
* Improve walks
* etc.
* add 'using namespace llvm;' again
* Always pass a location when creating new ops
* Improve cmake files
* remove unnecessary `LLVMSupport` links.
* add `PUBLIC` where missing.
* move LLVMCore under `LINK_COMPONENTS`.
* Add file headers and improve simulator comments
* Some LLHDToLLVM improvements
* Fix walks.
* Use `std::array` instead of `SmallVector` when resize is not needed.
* Fix a potential segfault caused by passing an ArrayRef with no data owner.
* Remove some unnecessary steps.
* Remove some unnecessary const strings.
* Add new LowerToLLVMOptions argument
The new argument was added in 10643c9ad85bf072816bd271239281ec50a52e31.
* Add missing file header boilerplate and newline
* Improve for-loop
* use static instead of anonymous namespace for functions
* fit to 80 columns, cast instead of dyn_cast
* Changes for LLVM update
* use llvm format instead of std::stringstream and iomanip
Co-authored-by: rodonisi <simon@rodoni.ch>
This also updates the README to include some building information.
Lots of caveats:
- This is all experimental
- The actual tool isn't interesting yet.
- The naming is arbitrary and will likely change.
- Much of the CMake code was cargo-culted from other places
because I don't know what I'm doing.