When creating a Docker image you might want to make it easily accessible to yourself and the world. Most of us are used to GitHub for sharing code and, nowadays, DockerHub for making Docker images publicly available.
We used to be able to connect our GitHub and DockerHub accounts and have DockerHub listen for changes in our code to rebuild and publish new Docker images. Unfortunately, DockerHub has changed its model: doing this now requires a premium subscription.
We can still rebuild and push all the changes from our local system to DockerHub, but this is tiresome and makes it a bit more complicated to keep an eye on our image's history. Luckily, we can use GitHub Actions to do those tasks for us whenever we push changes to the repository.
Let's start by setting up a Git repository and connecting it to our GitHub account to upload all the data. First in the computer terminal we need to create a directory and initialize it as a Git repository.
mkdir calculate-pi-image
cd calculate-pi-image
git init
Next we need to log in to our GitHub account and create a new repository; we can call it whatever we want, let's say calculate-pi.
In the terminal we can now connect our local directory to our remote GitHub repository.
git remote add origin git@github.com:DennisdeBest/calculate-pi.git
git branch -M main
Let's add a basic Dockerfile
and push it to the remote repository.
echo "FROM alpine:3.15" > Dockerfile
git add . && git commit -m "Init"
git push --set-upstream origin main
Now this file is available in the public GitHub repository, but it is not a built Docker image, nor is it available on DockerHub yet.
We now need to set up a DockerHub account. We need to choose, and remember, our Docker ID.
This will be the base of the name that will be needed for other people to get our images.
We can now build our Docker images locally and push them to DockerHub. So let's build our basic image.
The image needs a name of the form [dockerId]/[image-name]:[tag]
; in my case this will be debst/calculate-pi:latest
docker build -f Dockerfile . -t debst/calculate-pi:latest
We can now push this image to make it available on our DockerHub account. First we need to make sure we are logged in. It will ask for our Docker ID and password.
docker login
We can then push our image.
docker push debst/calculate-pi:latest
This now works and the image is available. However, these are quite a few steps to go through on every update. Let's make GitHub do them for us every time we push new updates to the repository.
As I mentioned at the start of this article, it used to be easy to tell DockerHub to keep an eye on any changes in the public GitHub repository and...
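The manual build-and-push steps above can be automated with a GitHub Actions workflow. A sketch of such a workflow (the file path, secret names and action versions are my assumptions; the DockerHub credentials would need to be added as repository secrets):

```yaml
# .github/workflows/docker-publish.yml
name: Publish Docker image

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Log in to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: debst/calculate-pi:latest
```

With this in place, every push to main rebuilds the image and publishes it to DockerHub, no local docker build or docker push needed.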
PHP 8.1 added Enums, which let us define a fixed set of possible values for a type. They are a very interesting way of making sure a property can only hold the values we want.
So, a while back, I started to play around with them and ran into a few issues with Doctrine and the Symfony FormType.
Let's dive into it !
I wrote a small program that generates browser screenshots from a given configuration, using Browsershot. I wanted a web page with a form for all the different configuration options; one of them is the output file type.
So I created an Enum FileType
:
<?php
namespace App\Enum;
enum FileType: string
{
case PNG = 'image/png';
case JPEG = 'image/jpeg';
case PDF = 'application/pdf';
}
There are 3 cases, each with a value; this is what is called a Backed Enum. You can access these values on an Enum by simply reading the value property.
$fileType = FileType::PNG;
$mimeType = $fileType->value; // 'image/png'
With this in place I wanted to add some simple functions: one to get the file extension for a given type, and one to check if the selected value is an image.
enum FileType: string
{
// ...
public static function isImage(?self $value): bool
{
return match ($value) {
self::PDF, null => false,
self::PNG, self::JPEG => true,
};
}
public static function getExtension(?self $value): ?string
{
return match ($value) {
self::PDF => 'pdf',
self::PNG => 'png',
self::JPEG => 'jpeg',
};
}
}
I can now use these easily in my Symfony Controller to set the filename when downloading the file.
#[Route('/shot/{shot}/file', name: 'shot_file')]
public function getShotFile(Shot $shot, Request $request): StreamedResponse
{
// ...
$fileType = $configuration->getFileType();
$mimeType = $fileType->value;
$extension = FileType::getExtension($fileType);
$filename = 'browserShot-' . (new \DateTime())->getTimestamp() . '.' . $extension;
// ...
}
And in my Twig templates.
{% if shotConfiguration.isImage %}
<img src="{{ shot.base64 }}">
{% else %}
<iframe src="{{ shot.base64 }}"></iframe>
{% endif %}
It is nice to be able to easily access these functions on anything that will use this Enum as a property. The next thing I needed was an easy way to get all the values and the cases of an Enum for error messages, so I added a few more functions.
public static function getCases(): array
{
    $cases = self::cases();
    return array_map(static fn(self $case) => $case->name, $cases);
}
public static function getValues(): array
{
    $cases = self::cases();
    return array_map(static fn(self $case) => $case->value, $cases);
}
Now in my Symfony command I can add an error if the fileType that is being set is not part of the available ones.
$io->error(sprintf('The file type you provided, %s, is not part of the available file types : [%s]', $fileTypeInput, implode(', ', FileType::getValues())));
Or the cases if that is what the command is expecting.
->addOption('format', null, InputOption::VALUE_OPTIONAL, 'Set the window to a format, allowed formats are ' . implode(', ', PaperFormat::getCases()))
With this I now have a nice command to generate the screenshots, that only accepts...
In the previous article, I added some simple CSS animations to show how the separation of my assets works. Now that that is in place, let's dive into JavaScript a bit more and create an even better typewriter animation based on your input !
To make our CSS and JavaScript interact with this blog page, we need to set up some HTML elements; as I use the Grav content editor, I typed these elements in directly.
<span class="typewriter-input-title">Add some input !</span>
<label for="typewriter-input-data">Input</label>
<textarea id='typewriter-input-data'>Default input data</textarea>
<label for="delay-input">Delay <small>(ms)</small></label>
<input type="number" id="delay-input" name="speed" value="50"/>
<br>
<button class="type-btn" id="type-btn">Type !</button>
<button class="type-btn" id="cancel-btn" disabled>Cancel</button>
<div id="typewriter-errors" class="banner"></div>
<div id="typewriter-output" class="typewriter-output"></div>
I now have all the elements in place.
First, I'll get all the elements I need :
export default class Typewriter {
private typeWriterInput: HTMLTextAreaElement
private typeWriterOutput: HTMLElement
private typeButton: HTMLButtonElement;
private delayInput: HTMLInputElement;
private errorElement: HTMLUListElement;
private cancelButton: HTMLButtonElement;
constructor() {
this.typeWriterInput = document.getElementById('typewriter-input-data') as HTMLTextAreaElement
this.typeWriterOutput = document.getElementById('typewriter-output') as HTMLElement
this.typeButton = document.getElementById('type-btn') as HTMLButtonElement
this.cancelButton = document.getElementById('cancel-btn') as HTMLButtonElement
this.delayInput = document.getElementById('delay-input') as HTMLInputElement
this.errorElement = document.getElementById('typewriter-errors') as HTMLUListElement
}
}
Next we need something to happen as soon as the Type !
button is clicked. In the constructor I added an event listener that will call another function of my class, typeEvent
:
export default class Typewriter {
constructor() {
//...
this.typeButton.addEventListener('click', () => {
this.typeEvent()
})
//...
}
typeEvent() {
this.resetTypewriter()
const delay = Number(this.delayInput.value);
let errors: string[] = [];
if (delay < 10) {
errors.push("Delay has to be 10 or more")
}
const text =...
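Under the hood, the typing effect just renders longer and longer prefixes of the input text. A pure sketch of that idea (the typeFrames helper is my own name, not part of the original class):

```typescript
// Produce the successive strings shown while "typing" the input text.
// In the real class, each frame would be written to typeWriterOutput
// with the configured delay between frames.
function typeFrames(text: string): string[] {
    const frames: string[] = [];
    for (let i = 1; i <= text.length; i++) {
        frames.push(text.slice(0, i));
    }
    return frames;
}

console.log(typeFrames('Hi!')); // [ 'H', 'Hi', 'Hi!' ]
```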
“Most good programmers do programming not because they expect to get paid or get adulation by the public, but because it is fun to program.” Linus Torvalds
As I recently added TypeScript and WebPack Encore to Grav, I wanted to find a way to add CSS and JavaScript only to certain pages.
So far I generated an assets.html
file that contains the links to the JavaScript and CSS files generated by
WebPack :
<script defer src="/user/themes/lingonberry-custom/build/app/app.js"></script>
<link href="/user/themes/lingonberry-custom/build/app/app.css" rel="stylesheet">
To generate different files I changed the webpack.config.js
file. I found most of the information in
the Symfony documentation.
However, as I do not really like repeating myself, I created a function :
function generateBaseConfig(name, buildPath, inputFile, outputFile, cleanOutput = true) {
let itemBuildPath = path.join(BUILD_PATH_BASE, name);
if (buildPath) {
itemBuildPath = path.join(BUILD_PATH_BASE, buildPath);
}
if (!inputFile) {
inputFile = `${name}.ts`
}
if (!outputFile) {
outputFile = 'assets.html'
}
let config = Encore
// directory where compiled assets will be stored
.setOutputPath(itemBuildPath)
// public path used by the web server to access the output path
.setPublicPath(path.join('/', itemBuildPath))
.addEntry(name, `./${path.join(PATH, 'js', inputFile)}`)
// When enabled, Webpack "splits" your files into smaller pieces for greater optimization.
//.splitEntryChunks()
// will require an extra script tag for runtime.js
// but, you probably want this, unless you're building a single-page app
//.enableSingleRuntimeChunk()
.disableSingleRuntimeChunk()
.enableSourceMaps(!Encore.isProduction())
// enables hashed filenames (e.g. app.abc123.css)
.enableVersioning(Encore.isProduction())
.addPlugin(new CompressionPlugin())
// .addPlugin(new MiniCssExtractPlugin())
.addPlugin(new HtmlWebpackPlugin({
inject: false,
filename: outputFile,
publicPath: path.join('/', itemBuildPath),
scriptLoading: 'defer',
templateContent: ({htmlWebpackPlugin}) => `
${htmlWebpackPlugin.tags.headTags}
`
}))
// enables @babel/preset-env polyfills
.configureBabelPresetEnv((config) => {
config.useBuiltIns = 'usage';
config.corejs = 3;
})
.enableSassLoader()
.enableTypeScriptLoader();
if (cleanOutput) {
config.cleanupOutputBeforeBuild()
}
return config.getWebpackConfig()
}
So now to add multiple inputs/outputs I only have to add a couple of lines:
const PATH = path.join('user', 'themes', 'lingonberry-custom')
const BUILD_PATH_BASE = path.join(PATH, 'build')
const baseEncoreConfiguration = generateBaseConfig('app');
baseEncoreConfiguration.name = 'baseEncoreConfiguration'
Encore.reset();
const animationsConfiguration = generateBaseConfig('animations');
animationsConfiguration.name = 'animationsConfiguration'
module.exports = [baseEncoreConfiguration, animationsConfiguration];
If you only add a name when calling the generateBaseConfig
function, the input will be
{{themeName}}/js/{{name}}.ts
and the output will be {{themeName}}/build/{{name}}/assets.html
. So in the
previous example I had the following files :
user/themes/lingonberry-custom
└── js
├── animations.ts
└── app.ts
Which generated the following output :
user/themes/lingonberry-custom
├── build
├── animations
│ ├── animations.css
│ ├── animations.js
│ ├── animations.js.gz
│ ├── assets.html
│ ├── assets.html.gz
│ ├── entrypoints.json
│ ├── manifest.json
│ └── manifest.json.gz
├── app
│ ├── app.css
│ ├── app.css.gz
│ ├── app.js
│ ├── app.js.gz
│ ├── assets.html
│ ├── assets.html.gz
│ ├── entrypoints.json
│ ├── fonts
│ ├── images
│ ├── manifest.json
│ └── manifest.json.gz
├── entrypoints.json
├── manifest.json
└── manifest.json.gz
Each file input has a separate output directory. Without this, the Encore cleanupOutputBeforeBuild
function would
always empty the main build directory. As the builds run in parallel, only the last build to finish would be left in the end,
the others deleted.
Now that they are generated the output files need to...
We are all used to seeing a scroll bar on all the pages we visit daily on the internet.
They are useful and explicit, but the default look is not very enjoyable.
For a while now, all major browsers have accepted some CSS pseudo-elements
to customise the way the browser's scrollbar looks. For more information, you can check it out on Can I Use.
As I have a few bits of code that scroll on the X axis, and a home page that scrolls on the Y axis, I wanted to make the scrollbars fit in a bit better with the rest of the website.
To get this done I have added some CSS
to my theme.
If you have seen my previous article about adding TypeScript and SCSS to Grav, you will just have to import another file in your main SCSS. Anywhere else, you will need to add the following CSS :
/* components/scrollbar.scss */
@use '../variables' as *;
/* width */
::-webkit-scrollbar {
width: 8px;
height: 8px;
}
/* Track */
::-webkit-scrollbar-track {
background: $bg-color;
outline: $primary-color-dark solid 4px;
}
/* Handle */
::-webkit-scrollbar-thumb {
background: $primary-color;
outline: $primary-color-dark;
border-radius: 50vh;
transition: 1s ease-in-out;
}
/* Handle on hover */
::-webkit-scrollbar-thumb:hover {
background: $primary-color-dark;
border: 1px solid $primary-color;
}
At the time of writing, Firefox does not support these CSS pseudo-elements. It does, however, support a property since version 64 that gets us close to what we have in the other browsers. I added this at the end of my previous file :
/* Firefox properties */
* {
scrollbar-color: $primary-color $bg-color;
}
The colors are now correct but unfortunately we can not change the border radius for now.
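The thickness, on the other hand, can be adjusted: Firefox also supports the scrollbar-width property, which accepts auto, thin, or none. A possible addition to the same rule:

```scss
/* Firefox: no rounded corners, but the bar can at least be made thinner */
* {
  scrollbar-width: thin;
}
```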
In my case this file is inside my components directory and called scrollbar.scss
; it imports a SCSS file containing variables from its parent directory :
$bg-color: #f1f1f1;
$bg-color-dark: darken($bg-color, 15%);
$primary-color: #ff6a1e;
$primary-color-dark: darken($primary-color, 15%);
This will then be included in my main SCSS file to make sure it will be added to my main HTML template thanks to the WebPack Encore build.
@use 'components/scrollbar';
Thanks to this my scrollbars will now be a lot closer to the style of my website :
Sometimes, while writing code, I lose track of time and end up not really knowing how much time I have spent on it.
To keep an eye on this, I could look at my watch and write down the starting time, but I would not remember to do that every time, and it would not create a log of what I have done in the past.
To keep track of time I made a sort of "stopwatch" in a bash script. As soon as I start it with the name of the "thing" I am doing, it keeps showing me the time I have spent, and as soon as I stop it, it writes that time to a log file.
So I started creating a new bash script with my localScript bash script creator.
localScript timer
#!/usr/bin/env bash
if [[ -z "$1" ]]; then
echo "You need to set a project name as the first variable"
exit 1
fi
PROJECT=$1
FILENAME='time.log'
START_TIME=$SECONDS
There I set the name of the log file I will write into, make sure a project name is defined and start the timer.
The script would need to keep running until I stopped it and, just before ending, it would need to write to a file. In order to get this working I added a while true
loop :
while true
do
DURATION=$(( SECONDS - $START_TIME ))
echo "$PROJECT time $DURATION"
sleep 10
done
Then I added a trap command to get the script to write to a log file when stopped :
function writeLog()
{
echo "$PROJECT date: $(date +'%d-%m-%Y') / user: $USER / time: $DURATION" >> $FILENAME
}
trap writeLog EXIT
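The trap fires on any exit, including CTRL+C, so the log line is written no matter how the script stops. The pattern can be checked in isolation (a standalone sketch, not part of the timer script):

```shell
#!/usr/bin/env bash
logfile=$(mktemp)

# Run a child shell that registers an EXIT trap; when the subshell
# terminates, the trap appends to the log file, just like writeLog does.
(
    trap 'echo "stopped" >> "$logfile"' EXIT
    echo "working..."
)

cat "$logfile"   # stopped
```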
Now when running the timer "test project"
script you will see this in the Command Line :
timer "test project"
test project time 0
test project time 10
test project time 20
As soon as you exit the script with CTRL+C
it will write to the time.log file :
test project date: 18-03-2021 / user: dennis / time: 12421
It sets the date, the username and the time spent on the project.
This was all good, but I wanted to make the log a little more readable and convert the time from raw seconds into a formatted string, so I added a function :
function secondsToTimeString()
{
echo "$(($1/3600))h:$(($1%3600/60))m:$(($1%60))s"
}
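Bash integer arithmetic does all the work here: dividing by 3600 gives whole hours, the remainder divided by 60 gives minutes, and the remainder modulo 60 gives seconds. For example (same function, run standalone):

```shell
#!/usr/bin/env bash
secondsToTimeString()
{
    echo "$(($1/3600))h:$(($1%3600/60))m:$(($1%60))s"
}

secondsToTimeString 3725   # 3725 s = 1 h, 2 min, 5 s -> 1h:2m:5s
```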
I also did not want it to write on many lines, so I added the -ne
options to my echo
, together with a carriage return so the line keeps being overwritten :
echo -ne "$PROJECT time $(secondsToTimeString $DURATION) \r"
This now works pretty well and can be attached to other commands to keep track of time. Here is the full script :
#!/usr/bin/env bash
if [[ -z "$1" ]]; then
echo "You need to set a project name as the first variable"
exit 1
fi
PROJECT=$1
FILENAME='time.log'
START_TIME=$SECONDS
function secondsToTimeString()
{
echo "$(($1/3600))h:$(($1%3600/60))m:$(($1%60))s"
}
function writeLog()
{
echo "$PROJECT date: $(date +'%d-%m-%Y') / user: $USER / time: $(secondsToTimeString $DURATION) [$DURATION]" >> $FILENAME
}
trap writeLog EXIT
while true
do
DURATION=$(( SECONDS - $START_TIME ))
echo -ne "$PROJECT time $(secondsToTimeString $DURATION) \r"
sleep 10
done
I started this Grav Blog in 2016. It was a nice and easy way to get started and to set everything up. I got the LingonBerry theme and only changed a few lines in its CSS code to change some colors. This worked fine and was up for years.
Recently I wanted to get back to my blog to add some more content, and so I looked through the code that was in place.
The theme just uses a plain CSS
file and a plain JS
file for its style and user interaction.
This is a fine base but, over the last few years, I moved all my JavaScript to TypeScript and the CSS to SCSS.
This makes development a bit easier and safer, but as the browser does not understand these languages, they need to be 'translated' back to CSS and JS.
I got used to WebPack Encore taking care of this in my Symfony projects, so I started looking at how to implement it in Grav. Reading the documentation, I installed it with yarn :
yarn add @symfony/webpack-encore --dev
When installing this in Symfony projects with Flex, the configuration files are generated automatically, but here this was not the case. So I added my own webpack.config.js
file at the root of my project :
const Encore = require('@symfony/webpack-encore');
const path = require('path');
const CompressionPlugin = require('compression-webpack-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');
// Manually configure the runtime environment if not already configured yet by the "encore" command.
// It's useful when you use tools that rely on webpack.config.js file.
if (!Encore.isRuntimeEnvironmentConfigured()) {
Encore.configureRuntimeEnvironment(process.env.NODE_ENV || 'dev');
}
const PATH = path.join('user', 'themes', 'lingonberry-custom')
const BUILD_PATH = path.join(PATH, 'build')
Encore
// directory where compiled assets will be stored
.setOutputPath(BUILD_PATH)
// public path used by the web server to access the output path
.setPublicPath(path.join('/', BUILD_PATH))
.addEntry('app', path.join('./', PATH, 'js', 'app.ts'))
.disableSingleRuntimeChunk()
.cleanupOutputBeforeBuild()
.enableSourceMaps(!Encore.isProduction())
// enables hashed filenames (e.g. app.abc123.css)
.enableVersioning(Encore.isProduction())
.addPlugin(new CompressionPlugin())
// .addPlugin(new MiniCssExtractPlugin())
.addPlugin(new HtmlWebpackPlugin({
inject: false,
filename: 'assets.html',
publicPath: path.join('/', BUILD_PATH),
scriptLoading: 'defer',
templateContent: ({htmlWebpackPlugin}) => `
${htmlWebpackPlugin.tags.headTags}
`
}
))
// enables @babel/preset-env polyfills
.configureBabelPresetEnv((config) => {
config.useBuiltIns = 'usage';
config.corejs = 3;
})
.enableSassLoader()
.enableTypeScriptLoader()
module.exports = Encore.getWebpackConfig();
I imported some plugins for the SASS and TS compilation, and a few more we will see later, then I added some variables :
const PATH = path.join('user', 'themes', 'lingonberry-custom')
const BUILD_PATH = path.join(PATH, 'build')
The theme for my website is located inside user/themes/lingonberry-custom. I wanted the path of my theme in a variable so that, should I ever change themes, it would be easy to update my webpack.config to watch and build the correct files.
You need to tell Webpack where the files it needs to watch are, and where to put them once it has translated them. With Grav the outputPath and the publicPath are nearly identical as everything is in the same root directory (all the files that are served by the frontend server, as well as,...
On one of the sites I have set up, the contact form started to receive quite a few automated spam messages.
To avoid this I could add something like reCAPTCHA, but first I'll try to take care of it on my own and not add more Google stuff to the page.
My form was pretty simple :
public function buildForm(FormBuilderInterface $builder, array $options)
{
$builder
->add('name', TextType::class,
[
'required' => false,
'label' => 'form.contact.name',
'translation_domain' => 'Default'
]
)
->add('email', EmailType::class,
[
'required' => false,
'label' => 'form.contact.email',
'translation_domain' => 'Default',
]
)
->add('url', UrlType::class,
[
'required' => false,
'label' => 'form.contact.url',
'translation_domain' => 'Default'
]
)
->add('message', TextType::class,
[
'required' => false,
'label' => 'form.contact.message',
'translation_domain' => 'Default'
]
);
}
Based on my Message
Entity :
/**
* @ORM\Entity(repositoryClass="App\Repository\MessageRepository")
*/
class Message
{
use TimestampableEntity;
private $id;
private $name;
/**
* @ORM\Column(type="string", length=255)
* @Assert\NotBlank
* @Assert\Email(
* message = "The email '{{ value }}' is not a valid email.",
* mode="strict"
* )
*/
private $email;
private $url;
private $message;
// ...
}
The form gets sent with an Ajax request and is then handled in the Controller :
/**
* @Route("message", name="send_message", options={"expose"=true}, methods={"POST"})
* @param Request $request
* @param TranslatorInterface $translator
* @return JsonResponse
*/
public function sendMessage(Request $request, TranslatorInterface $translator): JsonResponse
{
$messageForm = $this->createForm(MessageType::class, null);
$messageForm->handleRequest($request);
if ($messageForm->isSubmitted() && $messageForm->isValid()) {
/** @var Message $message */
$message = $messageForm->getData();
$this->manager->persist($message);
$this->manager->flush();
return new JsonResponse($translator->trans('form.contact.sent', [], 'Default'));
} else {
$errors = $this->getErrorsFromForm($messageForm);
return new JsonResponse($errors, Response::HTTP_BAD_REQUEST);
}
}
Great, so now let's add some stuff to deceive bots !
First off I imagine they recognize a field called email
and are eager to fill it in.
So I want to keep an email field but also a weird random field that will contain the actual email if someone fills in the form correctly.
So I added a property to my Message
entity with the email Assertions, and I removed them from the email field :
class Message
{
public const EMAIL_HIDDEN_INPUT = 'fhjgiz46';
//...
/**
* @ORM\Column(type="string", length=255)
*/
private $email;
/**
* @Assert\NotBlank
* @Assert\Email(
* message = "The email '{{ value }}' is not a valid email.",
* mode="strict"
* )
*/
private $fhjgiz46;
//...
}
This new field is not mapped and will therefore not appear in the database. Next I added it to my FormType
:
public function buildForm(FormBuilderInterface $builder, array $options)
{
$builder
//...
->add('email', EmailType::class,
[
'required' => false,
'attr' => ['class' => 'email']
]
)
->add(Message::EMAIL_HIDDEN_INPUT, EmailType::class,
[
'required' => false,
'label' => 'form.contact.email',
'translation_domain' => 'Default',
]
);
//...
}
I added a class to my honey-pot input to make it invisible to the user with some CSS rules :
input.email {
height: 0;
margin: 0;
text-decoration: none;
border: none;
}
Finally, in the controller I will want to check whether the email input
is set; if so, it means it was a bot and therefore I do not want to send an...
A lot of people are still used to a website needing to start with www.
but this is not the case anymore. This has created a few issues in the past where I would deploy a site to https://my-site.org
but then when people would try to share the site they would mention it as https://www.my-site.org
.
So I first started looking at how to make the site available on both those URLs. The first thing that came to mind was altering the Traefik
labels attached to the Docker container serving the site.
# docker-compose.yaml
version: '3.7'
services:
nginx:
...
labels:
- "traefik.enable=true"
- "traefik.http.routers.my-site.rule=Host(`my-site.org`) || Host(`www.my-site.org`)"
This works and the site becomes available on both of the URLs, great all done, time to go to bed ...
The problem with that is that site crawlers consider the data on the site duplicated, and they really do not like that. To avoid this, we need to make sure that anyone or anything trying to reach a page starting with www.
gets automatically, and permanently, redirected to the page without that sub-domain.
I first started by running a small Nginx instance in another container, labeled with the www.
prefix, that would return a 301 to the actual site. This was a bit hacky and became annoying to do for every site.
Depending on the host where the site's domain is registered, I could sometimes also add a redirect in the DNS settings, but not always, and it would mean a lot of clicking around.
Luckily, reading through the updates happening within Traefik, I found something.
I came upon the RedirectRegex middleware section of the documentation. It states that if the incoming request matches a regex, the request's URL is rewritten with a replacement pattern. So I figured I would give this a shot to get rid of the www subdomain.
There are many ways to add middlewares to your Traefik configuration, but as this one will be needed more and more often, I did not want to add many more labels to every Docker container. I ended up adding it to my dynamic_conf.yaml
file.
http:
middlewares:
redirect-www:
redirectRegex:
regex: "^https?://www\\.(.+)"
replacement: "https://${1}"
permanent: true
There it is ! Anything that starts with http://www. or https://www. will get permanently redirected to the naked domain over https :
http://www.my-site.org => https://my-site.org
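The substitution can be sanity-checked locally with sed, whose extended regex syntax handles this pattern the same way (the `!` delimiter just avoids escaping the slashes):

```shell
echo "https://www.my-site.org/some/page" \
  | sed -E 's!^https?://www\.(.+)!https://\1!'
# https://my-site.org/some/page
```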
So now if I need one of my containers to use this middleware I will have to add another label :
labels:
- "traefik.enable=true"
- "traefik.http.routers.my-site.rule=Host(`my-site.org`) || Host(`www.my-site.org`)"
- "traefik.http.routers.my-site.middlewares=redirect-www@file"
Updates
Changed ^https?://www.(.*)
to ^https?://www\\.(.+)
thanks to Navossoc's comments.