Envisioned Solution

Every interaction with an LLM consumes energy and tokens, and a single prompt often has to be followed by several clarifying prompts before the user gets the response they want.
PureSpective aims to help limit the energy and tokens used per prompt given to an LLM.
We plan to create a fully functional cross-platform application that helps everyday AI users write better prompts for their chosen large language models (LLMs).
We intend to do this by asking users a short series of questions, where each response determines the next question about the goal of their prompt.
The application will then produce a plain-text file that the user pastes into their chosen LLM to prime it for the prompt.
The LLM's response will then more closely match what the user is looking for, without spending as much energy or tokens.
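The adaptive question flow described above could be sketched roughly as follows. This is a minimal illustration, not PureSpective's actual design: the question tree, answer keys, and helper names (`run_questionnaire`, `build_priming_text`) are all hypothetical.

```python
# Hypothetical sketch of the adaptive questionnaire: each answer
# selects the next question, and the collected answers become a
# plain-text priming preamble for the user's chosen LLM.

# Each node holds a question and maps possible answers to the next node.
QUESTION_TREE = {
    "goal": {
        "question": "What is the goal of your prompt? (writing/coding)",
        "next": {"writing": "audience", "coding": "language"},
    },
    "audience": {
        "question": "Who is the intended audience?",
        "next": {},  # leaf: questionnaire ends here
    },
    "language": {
        "question": "Which programming language?",
        "next": {},  # leaf: questionnaire ends here
    },
}

def run_questionnaire(get_answer, start="goal"):
    """Walk the tree; each answer picks the next question."""
    answers = {}
    node_key = start
    while node_key:
        node = QUESTION_TREE[node_key]
        answer = get_answer(node["question"])
        answers[node_key] = answer
        node_key = node["next"].get(answer)  # None ends the loop
    return answers

def build_priming_text(answers):
    """Turn the collected answers into a priming preamble."""
    lines = ["You are assisting with the following request."]
    for topic, answer in answers.items():
        lines.append(f"- {topic}: {answer}")
    lines.append("Keep the response focused on the goal above.")
    return "\n".join(lines)

if __name__ == "__main__":
    # Scripted answers stand in for interactive user input.
    scripted = iter(["coding", "Python"])
    answers = run_questionnaire(lambda q: next(scripted))
    print(build_priming_text(answers))
```

The resulting text would be saved to a file for the user to paste into their LLM, front-loading the context that would otherwise require several back-and-forth prompts.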