Evaluations
Test, optimize and collaborate on pipeline architectures and prompts.
Cross-provider comparison
Compare how the same prompt performs against different LLM providers and parameters.
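To make the idea concrete, here is a minimal hand-rolled sketch of a cross-provider comparison done outside the platform, using the openai and anthropic Python SDKs directly. The model names, temperatures, and prompt are illustrative placeholders, not VectorShift APIs.

# Send the same prompt to different providers and parameters, then line up the outputs.
from openai import OpenAI
import anthropic

PROMPT = "Summarize the attached support ticket in two sentences."

def run_openai(prompt: str, temperature: float) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

def run_anthropic(prompt: str, temperature: float) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=512,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

# Same prompt, different providers and parameters, collected side by side.
results = {
    ("openai", 0.2): run_openai(PROMPT, 0.2),
    ("anthropic", 0.2): run_anthropic(PROMPT, 0.2),
    ("openai", 0.8): run_openai(PROMPT, 0.8),
}
for (provider, temp), output in results.items():
    print(f"--- {provider} @ temperature={temp} ---\n{output}\n")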
Bulk quantitative testing
Test pipeline scenarios against a saved bank of test cases. Leverage LLMs to score outputs so that, with each iteration, you move closer to the desired output.
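The mechanics, sketched by hand rather than through the platform: run a scenario over a saved bank of test cases and have an LLM grade each output on a 1-5 rubric. The model names, file format, and rubric below are illustrative assumptions, not VectorShift internals.

# Bulk quantitative testing with an LLM grader over a saved test-case bank.
import json
from openai import OpenAI

client = OpenAI()

def run_scenario(case: dict) -> str:
    # Stand-in for whatever pipeline scenario is under test.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": case["input"]}],
    )
    return resp.choices[0].message.content

def score_output(case: dict, output: str) -> int:
    # Ask an LLM to grade the output against the expected answer (1-5).
    rubric = (
        "Score the candidate answer against the expected answer on a 1-5 scale. "
        "Reply with a single integer.\n\n"
        f"Question: {case['input']}\nExpected: {case['expected']}\nCandidate: {output}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder grader model
        messages=[{"role": "user", "content": rubric}],
    )
    return int(resp.choices[0].message.content.strip())

# test_cases.json holds the saved bank: [{"input": ..., "expected": ...}, ...]
with open("test_cases.json") as f:
    test_cases = json.load(f)

scores = [score_output(case, run_scenario(case)) for case in test_cases]
print(f"Mean score over {len(scores)} test cases: {sum(scores) / len(scores):.2f}")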
Shared workspace
Iterate on pipeline scenarios together with a cross-functional team.
History tracking
Each scenario and its associated outputs are saved so that you can revisit them during your iteration cycles.
How it works
1. Start by selecting a pipeline you want to optimize.
2. Optionally, add additional pipeline scenarios with different prompts, parameters, architectures, and models.
3. Create or upload test cases to run your scenarios against.
4. Run scenarios against test cases and iterate further; the loop is sketched below.
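As a rough illustration of steps 2-4, assuming a hypothetical run_pipeline() stand-in for the pipeline under test: several scenario configurations are run against the same test cases, producing a grid of outputs to compare between iterations.

# Each scenario is a named configuration; every scenario runs against every test case.
from itertools import product

scenarios = [
    {"name": "baseline", "model": "gpt-4o-mini", "temperature": 0.2,
     "system": "Answer concisely."},
    {"name": "detailed", "model": "gpt-4o-mini", "temperature": 0.7,
     "system": "Answer with step-by-step reasoning."},
]
test_cases = ["How do I reset my password?", "Which plans support SSO?"]

def run_pipeline(scenario: dict, case: str) -> str:
    # Hypothetical stand-in: in practice this would invoke the pipeline under test.
    return f"[{scenario['name']}] answer to: {case}"

results = {
    (s["name"], case): run_pipeline(s, case)
    for s, case in product(scenarios, test_cases)
}
for (name, case), output in results.items():
    print(f"{name} | {case}\n  {output}")
# Compare the grid, adjust the weakest scenario, and re-run: that is one iteration.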
VectorShift Docs
Unlock advanced features and detailed guides in our extensive documentation.
pipeline_setup.py
from vectorshift.node import *
from vectorshift.pipeline import *

# File input node: accepts the file the pipeline will describe.
file_node = InputNode(name='file_input', input_type='file')

# Static text used as the system instruction for the LLM.
model_text_node = TextNode(text='Describe this file to me.')

# OpenAI LLM node: system prompt from the text node, user prompt from the file input.
llm_node = OpenAI_LLMNode(
    model='gpt-4-turbo',
    system_input=model_text_node.output(),
    prompt_input=file_node.output()
)

# Text output node exposing the LLM's response.
output_node = OutputNode(
    name='my_output',
    output_type='text',
    input=llm_node.output()
)
© 2023 VectorShift, Inc. All Rights Reserved.