blace.ai
For an easy start, simply visit the Quickstart or Quickstart from Hub page.
Integrating blace.ai into your project is super simple. In your CMakeLists.txt, you need to include the provided find module and link your target against the imported library:
include("../cmake/FindBlace.cmake")
target_link_libraries(<your_target> PRIVATE 3rdparty::BlaceAI)
Important: This setup is identical on all operating systems, so you can, for example, start developing on Windows and later deploy from Ubuntu simply by downloading the Ubuntu version of the package there; the folder structure is always the same. The same goes for your code: it works unchanged across all operating systems.
Add the blace.ai convenience header at the top of your source file. This will make all headers available.
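A minimal sketch, assuming the convenience header is named blace_ai.h (check your downloaded package for the exact file name):

```cpp
// Assumed header name; replace it with the convenience header
// shipped in your blace.ai package.
#include "blace_ai.h"
```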
blace.ai makes use of a so-called computation graph to execute all commands. This allows for implicit caching of reusable results (like model inferences with the exact same arguments) across runs. Usage is therefore split into two phases: graph construction and graph execution.
All models coming from the model hub or our converter tool (to be released soon) consist of two artifacts: the .h model header and the .bin payload.
In order to register the model, you pass the std::vector<char> contained in the header to the registration call; afterwards, you can use the identifier gemma_v1_default_v1_ALL_export_version_v10_IDENT to refer to the declared model.
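Since the exact registration call is not shown here, the following is only a shape sketch: register_model and model_data are placeholder names (not the actual blace.ai API), and the generated header file name is assumed as well.

```cpp
// Illustrative sketch only: register_model and model_data are placeholder
// names, and the generated model header name is assumed.
#include "gemma_v1_default_v1_ALL_export_version_v10.h"  // generated model header (name assumed)

void register_gemma_model()
{
    // The header contains the model description as a std::vector<char>
    // (called model_data here for illustration). After registration, the
    // model can be referred to via gemma_v1_default_v1_ALL_export_version_v10_IDENT.
    register_model(model_data);  // placeholder registration call
}
```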
First, you construct the computation graph (a DAG). Refer to public_ops.h to see an overview of all available operators (we have a limited set during beta but will roll out the rest of the operators soon).
Such a construction could look like the graph built in the Gemma demo project: there, five input nodes are constructed and fed into the infer_op construction, and all relevant model inference arguments are held by a blace::ml_core::InferenceArgsCollection object. Important: At this point, no model is loaded or executed; we simply define the execution structure.
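As a rough illustration of this construction phase (this is not the demo code: make_input_node and make_infer_op are placeholders, only blace::ml_core::InferenceArgsCollection comes from the description above; see public_ops.h for the real operators):

```cpp
// Illustrative sketch: make_input_node and make_infer_op are placeholders,
// not the real blace.ai operators; consult public_ops.h for the actual set.
blace::ml_core::InferenceArgsCollection args;  // holds all model inference arguments

// Build the five input nodes and feed them into the inference op.
// Nothing is loaded or executed yet; we only describe the graph.
auto in_0 = make_input_node(/* ... */);
auto in_1 = make_input_node(/* ... */);
auto in_2 = make_input_node(/* ... */);
auto in_3 = make_input_node(/* ... */);
auto in_4 = make_input_node(/* ... */);
auto infer = make_infer_op({in_0, in_1, in_2, in_3, in_4}, args);
```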
Now that we have constructed the graph, we can execute it. We do so by constructing a blace::computation_graph::GraphEvaluator from the last node (whose result we want to obtain) and running the evaluation:
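A minimal sketch, assuming an evaluate()-style method on the evaluator (the exact method name and return type are placeholders; infer is the last node from the construction sketch above):

```cpp
// Illustrative sketch: evaluate() is a placeholder method name; the actual
// GraphEvaluator call returns its result wrapped in a std::optional.
blace::computation_graph::GraphEvaluator evaluator(infer);
auto answer = evaluator.evaluate();  // std::optional holding the result on success
```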
If the evaluation fails, answer will hold a std::nullopt.
Our library will never throw exceptions at you. Instead, all calls to methods through the API wrap their result in a std::optional, which will hold a std::nullopt in case of failure. In that case, the error message is printed to the console.
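Consuming such a result is plain standard C++; for illustration, with a stand-in function rather than a real blace.ai call:

```cpp
#include <iostream>
#include <optional>
#include <string>

// Stand-in for any blace.ai API call: a value on success, std::nullopt on
// failure (the library itself prints the error message to the console).
std::optional<std::string> some_api_call() { return std::nullopt; }

int main()
{
    auto answer = some_api_call();
    if (!answer)
    {
        // Failure: the optional is empty; no exception was thrown.
        return 1;
    }
    std::cout << *answer << '\n';  // Success: use the contained value.
    return 0;
}
```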
Our model hub contains a growing list of compatible models that you can integrate into your application with a few lines of code. Check Quickstart from Hub to learn how to run the provided demo projects.