My first composer package: Add AI powered fixes to your Laravel error pages
After finishing my blog post Add AI powered fixes to your Laravel error pages recently, I figured I'd release this as a composer package. I wanted to do that primarily as a learning exercise: Despite having worked with PHP for well over 20 years now, and having used composer longer than I can remember, I've actually never created a composer package.
So here goes my first composer package: nanos/openai-exceptions - a really simple package to show AI powered fixes on your Laravel error pages. If you want to know how this works under the hood, see the blog post linked above, but if you just want to use it, simply require it in your Laravel project as a dev dependency:
composer require --dev nanos/openai-exceptions
Then publish the configuration file:
php artisan vendor:publish --provider="Nanos\OpenaiExceptions\OpenAiSolutionServiceProvider" --tag="config"
And supply your OpenAI API key in your .env file:
OPENAI_API_KEY={YOUR KEY HERE}
To test it, make some mistakes in your code, and you should see output like in the screenshot below:
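If you don't have a convenient bug at hand, a throwaway route that always throws is an easy way to trigger the error page. This is just a sketch; the route path and exception message are made up:

// routes/web.php - hypothetical route that deliberately fails
use Illuminate\Support\Facades\Route;

Route::get('/broken', function () {
    throw new \RuntimeException('Testing nanos/openai-exceptions');
});

Visiting /broken should then render the familiar Laravel error page (assuming APP_DEBUG is enabled, as it usually is locally), including the AI generated suggestion.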
Configuration
You can configure a fair bit of this package in the config/openai-exceptions.php file. It's all documented in there, but here is a summary, followed by a sketch of what the published file might look like:
- canSolve(Throwable $throwable): bool - This method decides whether OpenAI will be used for your exception: Return false to exclude exceptions from processing with OpenAI.
- cache - By default this package caches OpenAI responses for the same prompt for a week, both to reduce cost and to increase speed. You can define your own cache duration here, or disable caching altogether by setting this to 0.
- model - The OpenAI model to use. See the OpenAI documentation for a description of the models.
- max_tokens - The maximum number of tokens to generate for each fix. A helpful rule of thumb is that one token generally corresponds to ~4 characters of common English text, or roughly ¾ of a word, so the default of 200 tokens works out to about 150 words.
- temperature - The sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
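To make these options more concrete, here is a rough sketch of what the published config/openai-exceptions.php could look like. This is purely illustrative: the actual key names, defaults, and cache units in the shipped file may differ.

<?php
// config/openai-exceptions.php - illustrative sketch only; the published file may use different keys and defaults
return [
    // Return false here to skip OpenAI for certain exceptions
    'canSolve' => fn (\Throwable $throwable): bool => true,
    // How long to cache OpenAI responses (assumed to be seconds here); set to 0 to disable caching
    'cache' => 60 * 60 * 24 * 7,
    // Which OpenAI model to ask for a fix
    'model' => 'gpt-3.5-turbo',
    // Upper bound on the length of each generated fix (the default of 200 tokens is roughly 150 words)
    'max_tokens' => 200,
    // Lower values make the suggestion more focused and deterministic
    'temperature' => 0.2,
];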
Customising the prompt
If you wish to customise the prompt that is being sent to the OpenAI client to get your fix, you can publish the blade view, and change it as desired:
php artisan vendor:publish --provider="Nanos\OpenaiExceptions\OpenAiSolutionServiceProvider" --tag="views"
Then customise resources/views/vendor/nanos/openai-exceptions/prompt.blade.php.
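I won't reproduce the shipped view here, but to give an idea of the kind of template involved, a prompt view could look roughly like this. The variable name $throwable is an assumption, not necessarily what the package actually passes to the view:

{{-- resources/views/vendor/nanos/openai-exceptions/prompt.blade.php --}}
{{-- Illustrative sketch only; the data passed to the real view may differ --}}
You are an experienced Laravel developer. Suggest a concise fix for the following exception.

Exception class: {{ get_class($throwable) }}
Message: {{ $throwable->getMessage() }}
Location: {{ $throwable->getFile() }}:{{ $throwable->getLine() }}

Whatever the published view renders is what gets sent to OpenAI as the prompt, so you can reword it, add project specific context, or trim it to save tokens.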
Any thoughts?
Particularly since this is my first composer package, I'd love to hear your feedback! Anything I got wrong? Please let me know in the comments below.