[{"content":" Overview 01 02 03 04 05 06 07 08 Overview # The foundation of any project is a solid planning phase, because writing code from the get-go is rarely the best place to start. I had to find a project idea that was both realistic given the timeframe we had, but also fun to build, an idea that I thought was useful, at least to me.\nI needed something small enough to finish, but also rich enough to support the technologies we would keep adding throughout the semester.\nAfter thinking through a few possibilities, including an email sorter, which I am glad I did not choose, and what I now call eru, a backend for a learning-focused content platform, or as I like to call it, a Healthy Doom Scroller, I naturally went with the most interesting choice.\nEarly Scope # And now the real planning begins. So, what does a Healthy Doom Scroller need?\nLet’s break it down with a very primitive user story:\nAs a user, I would like to look at interesting and/or educational content and be able to interact with it somehow, depending on what I think of it.\nAnd what better way to begin than by putting the idea on a “piece of paper”?\nEarly Domain Sketch # This is the early concept I came up with.\nflowchart TD User --\u003e Interaction Interaction --\u003e Content User --\u003e Role Content --\u003e Category Even though this was only a rough sketch, I do not think much will change. It seems fairly straightforward.\na user model a user interaction model a content model some form of categorization and structure Final Thoughts on the First Week # The biggest lesson I took from my previous projects was that a strong semester project does not need to be huge. It needs to be well-scoped and able to evolve. 
I think choosing a smaller but coherent idea early will make the rest of the semester much more pleasant.\nBack To Overview Next Chapter ","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/planning-phase/","section":"Projects","summary":"","title":"01 - Planning Phase","type":"projects"},{"content":" Overview 01 02 03 04 05 06 07 08 Overview # Once the early project idea was settled, the next step was to set up the project structure. This week was therefore all about laying the foundations and getting the project to a point where it could actually persist data instead of just existing as an idea.\nSetting Up Hibernate # One of the most important classes in my setup at this stage was HibernateConfig, because it acted as the main entry point for the Hibernate configuration. It was the class the rest of the application called whenever it needed an EntityManagerFactory.\npublic static EntityManagerFactory getEntityManagerFactory() { if (emf == null) { synchronized (HibernateConfig.class) { if (emf == null) { emf = HibernateEmfBuilder.build(buildProps()); } } } return emf; } What this does is:\nensure that the application only creates one EntityManagerFactory, which effectively makes it behave like a singleton throughout the application collect the Hibernate settings through buildProps() send those settings to HibernateEmfBuilder return the finished EntityManagerFactory to the rest of the application Another important part of HibernateConfig is that it decides whether the application is running locally or in a deployed environment. In development, I used the create setting, which recreates the schema and clears the database on each run. 
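Stripped of the Hibernate specifics, the double-checked locking in getEntityManagerFactory() can be sketched as a plain-Java lazy singleton. One caveat worth hedging: for the pattern to be fully safe, the shared field should be declared volatile, which the snippet above does not show.

```java
// Plain-Java version of the double-checked locking used in HibernateConfig:
// the expensive object is created once, lazily, and safely under concurrency.
public class LazySingleton {
    // volatile is required so no thread can observe a half-constructed instance.
    private static volatile LazySingleton instance;

    private LazySingleton() { }

    public static LazySingleton getInstance() {
        if (instance == null) {                      // first check, no locking
            synchronized (LazySingleton.class) {
                if (instance == null) {              // second check, under the lock
                    instance = new LazySingleton();  // expensive creation happens once
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // Both calls return the exact same object.
        System.out.println(LazySingleton.getInstance() == LazySingleton.getInstance()); // true
    }
}
```

The outer null check skips the lock on the common path; the inner one prevents a second thread from creating a duplicate while waiting for the lock.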
That made it easier to test and work with the project using what I think of as a “clean slate” principle.\nprivate static Properties buildProps() { Properties props = HibernateBaseProperties.createBase(); props.put(\u0026#34;hibernate.hbm2ddl.auto\u0026#34;, \u0026#34;create\u0026#34;); if (System.getenv(\u0026#34;DEPLOYED\u0026#34;) != null) { setDeployedProperties(props); } else { setDevProperties(props); } return props; } This made the setup easier to reason about, because one class had the responsibility of:\ncreating the base Hibernate configuration choosing the correct database settings preparing everything needed before Hibernate starts I found this especially interesting because, for me, it was a completely new way of working with databases.\nEarly Entity Design for Content # One of the first important entities was Content, because it represents the actual educational material that the whole platform revolves around:\n@Entity @Table(name = \u0026#34;content\u0026#34;) public class Content { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer id; @Column(nullable = false) private String title; @Column(nullable = false) private String body; @Enumerated(EnumType.STRING) @Column(nullable = false) private ContentType contentType; } This was intentionally simple, but it already captured the basic domain idea:\ncontent has a title content has a body content has a type What made this useful during the week was not just the class itself, but also the annotations around it. 
This was where I practiced the core JPA mapping concepts:\n@Entity to mark the class as persistent @Table to control the table mapping @Id and @GeneratedValue for the primary key @Column to define database column rules @Enumerated to persist enum values in a readable way, which I will return to later Understanding Hibernate and JPA # This was also the week where I got a much clearer understanding of the difference between Hibernate and JPA.\nJPA is the Java standard for persistence. It defines the annotations and APIs used to map Java objects to database tables. Hibernate is the framework that implements JPA and handles the actual ORM behavior in practice. That distinction helped me understand why I was working with annotations such as @Entity and APIs such as EntityManager, while still relying on Hibernate to handle the actual database communication underneath. Since Hibernate builds on top of JDBC, it also meant I could work at a higher level of abstraction instead of writing raw JDBC code myself.\nThe EntityManager # Another important part of the foundation phase was learning how persistence actually happens in code.\nThis was where EntityManagerFactory and EntityManager really started to make sense:\nEntityManagerFactory belongs to application startup and is expensive to create EntityManager is used for the actual unit of work against the database Inside the DAO layer, I used EntityManager to perform CRUD operations. This also made transactions much more concrete, because writing data meant beginning a transaction, calling persist or merge, and committing it correctly.\nDAOs # I used a DAO-based persistence layer from the beginning instead of pushing database logic directly into controllers. 
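The IDAO interface itself is not shown in the chapters, but based on the operations mentioned, a minimal version might look like the sketch below, paired here with an in-memory stand-in instead of Hibernate so the contract can be exercised on its own (update is omitted for brevity):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Minimal generic DAO contract: one type parameter for the entity, one for its id.
interface IDAO<T, ID> {
    T create(T entity);
    List<T> getAll();
    Optional<T> getById(ID id);
    void delete(ID id);
}

// Tiny in-memory implementation, standing in for the Hibernate-backed DAO,
// just to show how callers use the contract without knowing the storage.
public class InMemoryContentDao implements IDAO<String, Integer> {
    private final Map<Integer, String> rows = new HashMap<>();
    private int nextId = 1;

    public String create(String entity) { rows.put(nextId++, entity); return entity; }
    public List<String> getAll() { return new ArrayList<>(rows.values()); }
    public Optional<String> getById(Integer id) { return Optional.ofNullable(rows.get(id)); }
    public void delete(Integer id) { rows.remove(id); }

    public static void main(String[] args) {
        IDAO<String, Integer> dao = new InMemoryContentDao();
        dao.create("Gravity");
        System.out.println(dao.getAll()); // [Gravity]
    }
}
```

The point of the interface is exactly what the text describes: callers see create, getAll, getById, and delete, never EntityManager details.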
That felt a little heavier at first, but it gave the project a cleaner structure later when services and tests were added.\nOne good example of that is ContentDAO, which became the place where all persistence logic for content was collected:\npublic class ContentDAO implements IDAO\u0026lt;Content, Integer\u0026gt; { private final EntityManagerFactory emf; public ContentDAO() { this(HibernateConfig.getEntityManagerFactory()); } } This was an important design choice because it meant controllers and services did not need to know how Hibernate worked internally. They only needed to call methods such as create, getAll, getById, update, or delete.\nA small CRUD example from the DAO looks like this:\npublic Content create(Content content) { try (EntityManager em = emf.createEntityManager()) { em.getTransaction().begin(); em.persist(content); em.getTransaction().commit(); return content; } } I think this class was useful for my understanding because it made the JPA flow very visible:\ncreate an EntityManager begin a transaction persist or merge the entity commit the transaction Content Type Enum # Another small but meaningful design choice was using an enum for content type:\npublic enum ContentType { FACT, THEORY, QUOTE; } This helped me avoid using random strings throughout the codebase. 
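A tiny plain-Java check makes the storage difference concrete: name() is the value EnumType.STRING persists, while ordinal() is what the default EnumType.ORDINAL would persist.

```java
// ContentType as in the entity; this demo only uses standard enum methods.
public class EnumStorageDemo {
    enum ContentType { FACT, THEORY, QUOTE }

    public static void main(String[] args) {
        ContentType t = ContentType.THEORY;
        System.out.println(t.name());    // THEORY - readable, survives reordering of constants
        System.out.println(t.ordinal()); // 1 - silently wrong if constants are ever reordered
    }
}
```

Storing "THEORY" instead of 1 also means a database row stays understandable without the Java source next to it.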
Instead of storing a free-text type everywhere, I had a limited set of valid values that matched the domain more clearly.\nThat enum was then used directly in the Content entity:\n@Enumerated(EnumType.STRING) @Column(nullable = false) private ContentType contentType; This was a nice design decision for two reasons:\nit made the model more type-safe in Java it stored the enum as readable text in the database instead of as unclear numeric values So ContentDAO and ContentType may look small on their own, but together they show two of the core backend ideas from this phase:\npersistence logic should be separated into dedicated classes domain rules should be represented as clearly as possible in the model JPA Lifecycle States # Screenshot from Jon Bertelsen\u0026rsquo;s teaching video # 03-jpa-basics-entity-lifecycle, taken by the author.\nThis diagram shows the four lifecycle states of a JPA entity and how the EntityManager, aka the Goblin Banker, controls them and moves entities between those states.\nTransient means the object exists only in Java and is not yet managed or stored in the database. Managed means the object is tracked by the EntityManager, so changes to it can be synchronized with the database. Detached means the object was previously managed, but is no longer being tracked. Removed means the object has been marked for deletion and will be deleted from the database on flush or commit. The diagram helped me understand that JPA does not just save objects directly. Instead, entities move between different states depending on methods such as persist(), find(), merge(), and remove().\nFinal Thoughts on the Second Week # This week was where the project started to feel real. 
The idea was no longer just something conceptual, because I now had the basic persistence layer, Hibernate configuration, and the first entities in place.\nWhat made this week especially important for me was that I was not just writing code to make something work, but also starting to understand the architectural reasoning behind it. Setting up HibernateConfig, working with EntityManagerFactory and EntityManager, and introducing a DAO layer gave me a much clearer picture of how persistence is structured in a Java backend application.\nPrevious Chapter Next Chapter ","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/jpa-basics/","section":"Projects","summary":"","title":"02 - Laying the Foundation","type":"projects"},{"content":" Overview 01 02 03 04 05 06 07 08 Overview # This week I really had to think about the entity relationships in the database, more so in terms of understanding all of the different relationship types. Do I model this as many-to-many, many-to-one, or perhaps one-to-one? 
This took unbelievable brainpower, to the point where I had to call in support from Codex to make sure I was doing the right thing.\nUsers and Interactions # I introduced UserInteraction as its own entity rather than storing likes or bookmarks directly on User or Content.\nBy making interaction its own entity, I could represent:\nwhich user made the interaction what content it belongs to what kind of reaction it was when it happened @Entity @Table( name = \u0026#34;user_interactions\u0026#34;, uniqueConstraints = { @UniqueConstraint( name = \u0026#34;uk_user_interaction_user_content\u0026#34;, columnNames = {\u0026#34;user_id\u0026#34;, \u0026#34;content_id\u0026#34;} ) } ) public class UserInteraction { @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = \u0026#34;user_id\u0026#34;, nullable = false) private User user; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = \u0026#34;content_id\u0026#34;, nullable = false) private Content content; } Note: I\u0026rsquo;m using FetchType.LAZY here because it avoids loading related data unless it is actually needed.\nFinal Thoughts on the Third Week # This week taught me that relationships are hard, and it\u0026rsquo;s even harder when your job is to fire the correct Cupid arrows at the right entities, because otherwise the result is wrong domain logic, and no one wants that.\nPrevious Chapter Next Chapter ","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/jpa-basics-and-relations/","section":"Projects","summary":"","title":"03 - Relationship Challenges","type":"projects"},{"content":" Overview 01 02 03 04 05 06 07 08 Overview # At this point in the journey, I needed something new, and since we were about to really dive into APIs, I thought there was no better time to actually implement an API into my API. So that is what I decided to do.\nThe idea was simple. 
My thought process was to eventually have an elaborate button that the user could press, so that whatever content was shown on the screen could be sent to an AI API and expanded on further, whether it was a fact, theory, or quote.\nI obviously needed an AI for this, so I decided to go with OpenAI\u0026rsquo;s API, which turned out to be very affordable. The total cost was less than 30 DKK.\nWhat Was Needed in Order for It to Work # OpenAI client integration a service wrapper for AI functionality a dedicated elaboration route environment-based configuration for the API key How the AI Flow Works # The AI integration is split into small responsibilities instead of being handled in one large class.\nThe flow looks like this:\na request hits the AI route the controller validates the input the service forwards the request the client builds the OpenAI HTTP request the response is parsed and returned as JSON The request itself is simple:\npublic record ElaborateRequestDTO(String title, String body) { } The controller receives that DTO, validates it, and calls the service:\nString explanation = openAiService.elaborateContent( request.title(), request.body() ); In the actual controller, there is also a guard for missing AI configuration and some input validation:\nif (openAiService == null) { throw ApiException.configuration(\u0026#34;AI integration is disabled...\u0026#34;); } ElaborateRequestDTO request = ctx.bodyAsClass(ElaborateRequestDTO.class); if (request == null || request.title() == null || request.title().isBlank() || request.body() == null || request.body().isBlank()) { throw ApiException.badRequest(\u0026#34;Both title and body are required\u0026#34;); } That part mattered because it kept the route safe and predictable. 
The AI endpoint should fail clearly if the input is invalid, instead of sending broken requests to the external API.\nThe response is also wrapped in a DTO:\npublic record ElaborateResponseDTO(String explanation) { } That made the API response clearer and kept the controller output structured.\nThe actual prompt is built in OpenAiClient:\nString prompt = \u0026#34;\u0026#34;\u0026#34; Explain the following content in a simple, interesting, and educational way. Keep it concise and easy to understand. Title: %s Content: %s \u0026#34;\u0026#34;\u0026#34;.formatted(title, body); That prompt is then sent to OpenAI through a plain HTTP request. From the client\u0026rsquo;s side, the whole flow is triggered by a request to the elaboration endpoint like this one:\n{ \u0026#34;title\u0026#34;: \u0026#34;Gravity\u0026#34;, \u0026#34;body\u0026#34;: \u0026#34;Gravity is the force that attracts objects toward each other. It explains why objects fall toward the Earth and why planets orbit stars.\u0026#34; } Which produces a response from the endpoint:\n{ \u0026#34;explanation\u0026#34;: \u0026#34;Sure!\\n\\n**Gravity** is like an invisible magnet that pulls things toward each other. It’s why if you drop a ball, it falls to the ground instead of flying away. It also keeps the Earth moving around the Sun, and the Moon moving around the Earth. Without gravity, everything in space would just float away! So, gravity is the reason things stick together and stay in place.\u0026#34; } And that is the gist of it.\nFinal Thoughts on the Fourth Week # The main lesson here was that AI was not really the hard part by itself. 
The real work was designing the flow around it: OpenAI also has a very thorough guide for implementing their API, which made the integration process easier to understand.\nPrevious Chapter Next Chapter ","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/data-integration/","section":"Projects","summary":"","title":"04 - Implementing OpenAI's API","type":"projects"},{"content":" Overview 01 02 03 04 05 06 07 08 Overview # With the AI somewhat in place, the next natural step was to expose the system properly through REST endpoints.\nUntil this stage, most of the work had been about entities, persistence, and relationships. That part was useful, but it still left one important piece missing: the HTTP layer that actually lets a client interact with the system in a structured way. Now, I already have some HTTP experience with the AI implementation, so this shouldn\u0026rsquo;t be too bad.\nWhat I Added # content controller content routes request DTOs response DTOs content service route registration under /api/v1 cleaner startup through ApplicationConfig and DependencyContainer The most important endpoints introduced around this phase were:\nGET /content GET /content/{id} POST /content PUT /content/{id} DELETE /content/{id} This was where REST began to make much more sense to me in practice. Instead of only thinking in terms of methods in Java classes, I had to start thinking in terms of resources, URIs, HTTP methods, and response codes.\nIn eru, content became the resource, while /content/{id} represented one specific piece of content. The same idea later extended naturally to endpoints such as /content/{id}/interactions and /auth/me.\nDTOs and API Boundaries # One of the design decisions I liked most in this phase was keeping the HTTP contract separate from the persistence model. 
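A toy sketch of why that separation matters (the entity and its fields here are hypothetical, chosen for illustration): the mapping step is where internal data gets left behind, so nothing sensitive can leak into a response by accident.

```java
// Hypothetical entity with a field the API must never expose.
class UserEntity {
    String username;
    String passwordHash; // internal only, belongs to the persistence model
    UserEntity(String username, String passwordHash) {
        this.username = username;
        this.passwordHash = passwordHash;
    }
}

// The API model: only what the client should see.
record UserDTO(String username) { }

public class DtoBoundaryDemo {
    // The mapper is the single place that decides what leaves the system.
    static UserDTO toDto(UserEntity e) {
        return new UserDTO(e.username); // passwordHash deliberately left behind
    }

    public static void main(String[] args) {
        UserEntity entity = new UserEntity("student1", "$2a$10$...");
        System.out.println(toDto(entity)); // UserDTO[username=student1]
    }
}
```

Because the DTO simply has no field for the hash, forgetting to filter it is impossible by construction.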
I did not want to expose JPA entities directly, because entities belong to the database model, while DTOs belong to the API model.\nFor content, the response DTO looks like this:\npublic record ContentDTO( Integer id, String title, String body, ContentType contentType, String category, String source, String author, boolean active, LocalDateTime createdAt ) { } That gave me much better control over what the client should actually see.\nIt also became clear that request and response shapes do not always have to be identical. Authentication is a simple example of that:\npublic record RegisterRequestDTO( String firstName, String lastName, String email, String username, String password ) { } public record AuthResponseDTO(String token, Integer userId, String username) { } That is a small but important example of why DTO separation matters. A password belongs in an incoming request, but obviously not in a response.\nRoute Structure # Another important improvement was separating route registration into dedicated route classes. Instead of putting everything in one place, I used Routes as the entry point and then delegated to classes such as ContentRoutes, AuthRoutes, and InteractionRoutes.\nOne example from ContentRoutes looks like this:\nroutes.post(\u0026#34;/content\u0026#34;, contentController::create, AppRole.ADMIN); routes.get(\u0026#34;/content\u0026#34;, contentController::getAll, AppRole.ANYONE); routes.get(\u0026#34;/content/{id}\u0026#34;, contentController::getById, AppRole.ANYONE); routes.put(\u0026#34;/content/{id}\u0026#34;, contentController::update, AppRole.ADMIN); routes.delete(\u0026#34;/content/{id}\u0026#34;, contentController::delete, AppRole.ADMIN); This gave me practical experience with how Javalin maps HTTP methods to route handlers:\nGET for reading resources POST for creating new resources PUT for updating existing resources DELETE for removing resources That sounds simple, but it was actually one of the points where REST started to click for me. 
The HTTP method itself communicates intent, so the API becomes easier to understand both for me and for whoever might use it.\nI also liked that each route class is responsible for one small area of the API. If I want to change how content behaves, I know where to look, and I do not have to dig through one giant route file.\nRequest Handling and Validation # Another important thing I learned in this phase was how much the Javalin Context object actually does.\nInside the controllers, I used ctx to:\nparse request bodies with ctx.bodyAsClass(...) read path parameters with ctx.pathParam(\u0026quot;id\u0026quot;) read query parameters with ctx.queryParam(...) set status codes with ctx.status(...) return JSON with ctx.json(...) For example, the content list endpoint supports filtering through query parameters:\nString typeParam = ctx.queryParam(\u0026#34;type\u0026#34;); String activeOnlyParam = ctx.queryParam(\u0026#34;activeOnly\u0026#34;); That made the API feel much more practical. Instead of creating separate routes for every variation, I could support:\nGET /api/v1/content GET /api/v1/content?type=FACT GET /api/v1/content?activeOnly=true I also ended up with a validation split that I think makes sense architecturally.\nThe controller deals with HTTP-shaped concerns such as parsing input and extracting parameters, while the service layer handles the actual request validation:\nprivate static void validateRequest(ContentRequestDTO request) { if (request == null) { throw ApiException.badRequest(\u0026#34;Request body is required\u0026#34;); } if (request.title() == null || request.title().isBlank()) { throw ApiException.badRequest(\u0026#34;Title is required\u0026#34;); } if (request.body() == null || request.body().isBlank()) { throw ApiException.badRequest(\u0026#34;Body is required\u0026#34;); } if (request.contentType() == null) { throw ApiException.badRequest(\u0026#34;Content type is required\u0026#34;); } } I liked this split because it kept the controllers 
thinner while still making the validation rules explicit.\nCleaning Up the App Structure # This phase also turned into a bit of a refactor.\nAs the project grew, I did not want Main to become a long list of object creation, route wiring, and configuration code. So I moved those responsibilities into ApplicationConfig and DependencyContainer.\nThat meant:\nMain stays small and only starts the application DependencyContainer handles object creation and wiring ApplicationConfig handles Javalin setup, routes, logging, and exception handling I think this was one of the more valuable decisions of the phase, because it made the whole project feel more intentional and much easier to extend.\nFinal Thoughts on the Fifth Week # What I liked most about this week was that I finally got rid of the 200+ lines of code inside Main, and I\u0026rsquo;m quite the structured person myself, so it felt good, even though it was somewhat of a rookie mistake to begin with. Nonetheless, with DTOs, controllers, routes, exception handling, and the cleaned-up app configuration in place, eru started feeling much more coherent and much closer to what I would describe as a proper REST API.\nPrevious Chapter Next Chapter ","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/rest-api/","section":"Projects","summary":"","title":"05 - eru = REST API","type":"projects"},{"content":" What is eru? # In its essence, eru is a project designed to make your doom-scrolling worth remembering. The idea is to create a platform that can deliver memorable quotes, scientific theories, and interesting facts without the need for a boring lecture.\nThe one idea to rule them all # The name eru isn\u0026rsquo;t from this earth at all. It originates from J.R.R. 
Tolkien\u0026rsquo;s Lord of the Rings universe, where Eru is the omniscient god of everything, with Eru literally meaning \u0026ldquo;The One.\u0026rdquo; While the project isn\u0026rsquo;t an all-knowing perfect being, maybe it can still help some people throw some bad habits into Mount Doom.\nWhy did I need eru? # Ever since its beginning, the Internet has been a platform where our attention is a hot commodity. Modern social media algorithms and marketing strategies are designed to keep us in the endless loop of scrolling. That got me to ask myself the following questions and start thinking about a solution.\nWhen was the last time you went to the bathroom without your phone? When was the last time you sat on any public transport without scrolling needlessly? When was the last time you doom-scrolled and actually remembered what you saw? On average, I spend 2-3 hours on my phone a day, which roughly equates to a thousand hours a year of phone time. What if all this time was spent differently?\nThis is where eru comes in. I wanted to make something to counteract the negative effect that doom-scrolling had on me. So what if I could actually learn something while scrolling? Let\u0026rsquo;s be honest, it would be a lot easier to have a healthy media platform than it would be to kick the phone addiction altogether.\nGo to Exam Portfolio ","date":"23 March 2026","externalUrl":null,"permalink":"/projects/eru-project/what-is-eru/","section":"Projects","summary":"","title":"Welcome to eru","type":"projects"},{"content":" Overview 01 02 03 04 05 06 07 08 Overview # It\u0026rsquo;s finally time to do some tests. At this point, the backend had authentication, authorization, and protected routes, which meant there were now many more ways for things to go wrong. 
So to make sure things don\u0026rsquo;t go wrong, you make them go wrong intentionally with tests.\nWhat I Added # unit tests for auth, content, and interaction services integration tests that start the API against a real PostgreSQL test database tests for register, login, logout, protected routes, and filtered interaction endpoints Rest Assured-based API tests with Hamcrest assertions helper factory for spinning up the test application I was kind of experimenting so I ended up using two styles of API testing in this project:\nJava\u0026rsquo;s built-in HttpClient together with JUnit, Jackson, and Testcontainers Rest Assured for more expressive endpoint and response assertions Testing the API from the Outside # I chose to boot the application against a real PostgreSQL container instead of mocking the whole stack.\n@Testcontainers(disabledWithoutDocker = true) class AuthContentRoutesTest { @Container private static final PostgreSQLContainer\u0026lt;?\u0026gt; POSTGRES = new PostgreSQLContainer\u0026lt;\u0026gt;(\u0026#34;postgres:16-alpine\u0026#34;) .withDatabaseName(\u0026#34;eru_test\u0026#34;) .withUsername(\u0026#34;postgres\u0026#34;) .withPassword(\u0026#34;postgres\u0026#34;); } That gave me much better confidence in:\nroute behavior JSON handling persistence security behavior The first integration tests send actual HTTP requests to the running API with HttpClient:\nHttpResponse\u0026lt;String\u0026gt; registerResponse = sendJsonRequest( \u0026#34;POST\u0026#34;, \u0026#34;/auth/register\u0026#34;, null, Map.of( \u0026#34;firstName\u0026#34;, \u0026#34;Student\u0026#34;, \u0026#34;lastName\u0026#34;, \u0026#34;One\u0026#34;, \u0026#34;email\u0026#34;, \u0026#34;student1@example.com\u0026#34;, \u0026#34;username\u0026#34;, \u0026#34;student1\u0026#34;, \u0026#34;password\u0026#34;, \u0026#34;secret123\u0026#34; ) ); I could verify the full request flow from HTTP request to controller to service to database and back again.\nLater on, I also added Rest Assured 
tests, which made the endpoint assertions much cleaner.\ngiven() .contentType(ContentType.JSON) .body(registerPayload(\u0026#34;student1\u0026#34;, \u0026#34;student1@example.com\u0026#34;)) .when() .post(\u0026#34;/auth/register\u0026#34;) .then() .statusCode(201) .body(\u0026#34;username\u0026#34;, equalTo(\u0026#34;student1\u0026#34;)) .body(\u0026#34;token\u0026#34;, not(blankOrNullString())); This was especially useful for checking CRUD-like behavior and status codes:\n201 when a user or content item is created 200 when data is returned correctly 204 when logout succeeds 401 when a protected route is called without a valid token 404 when a requested resource does not exist This also gave me a much better feel for why Rest Assured is useful in API testing. It lets the test read almost like a description of the HTTP contract, especially when combined with Hamcrest matchers.\nTesting Authentication and Authorization # For protected endpoints, the tests send the JWT through the Authorization header as a Bearer token. That let me verify that the access rules in the routes actually worked in practice.\nIn the Rest Assured version, this becomes very compact:\ngiven() .auth().oauth2(token) .queryParam(\u0026#34;reactionType\u0026#34;, \u0026#34;BOOKMARK\u0026#34;) .when() .get(\u0026#34;/interactions/me\u0026#34;) .then() .statusCode(200) .body(\u0026#34;$\u0026#34;, hasSize(1)); One example is the logout flow. 
A user first receives a token through registration or login, and after logout that same token should no longer be accepted:\nHttpResponse\u0026lt;String\u0026gt; logoutResponse = sendRequest(\u0026#34;POST\u0026#34;, \u0026#34;/auth/logout\u0026#34;, token, null); assertEquals(204, logoutResponse.statusCode()); HttpResponse\u0026lt;String\u0026gt; meResponse = sendRequest(\u0026#34;GET\u0026#34;, \u0026#34;/auth/me\u0026#34;, token, null); assertEquals(401, meResponse.statusCode()); I liked this kind of test because it checks behavior that matters from the outside. It is not only about whether some internal method returns true or false, but whether the API really enforces the security rules it claims to have.\nFinal Thoughts for the Sixth Week # Working with both HttpClient and Rest Assured also made the difference clear to me. HttpClient is perfectly fine for full integration tests, but Rest Assured makes the intent of API assertions much clearer and more concise. That made it easier to connect the implementation to the actual learning goals around REST API testing.\nPrevious Chapter Next Chapter ","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/rest-and-test/","section":"Projects","summary":"","title":"06 - REST and Test","type":"projects"},{"content":" Overview 01 02 03 04 05 06 07 08 Overview # Security was the phase where eru started feeling like an application rather than just a content API.\nUp until this point, the project had structure, persistence, and routes. But now I had to think much more seriously about access:\nwho should be able to do what what should be public what should require a token what should be admin-only This was a very different type of work than the earlier persistence and REST phases. 
The problem was no longer only “can the route do the thing?” but also “should this user be allowed to do the thing at all?”\nWhat I found interesting here is that the hard part was not learning what a JWT is in theory. The hard part was fitting authentication and authorization into an existing codebase without turning the whole project into “security code everywhere”.\nWhat I Added # JWT utility auth service auth controller and auth routes authenticated user context role-based route protection password hashing and password verification with BCrypt Later on, I also extended this part of the system with:\nimproved registration fields logout a current-user interaction overview JWT # One of the things I liked about JWT fairly quickly was that it gave me a compact way to carry both identity and role information in the same token.\nreturn JWT.create() .withIssuer(issuer) .withIssuedAt(now) .withExpiresAt(expiresAt) .withSubject(user.getUsername()) .withClaim(\u0026#34;userId\u0026#34;, user.getId()) .withArrayClaim(\u0026#34;roles\u0026#34;, roles.toArray(new String[0])) .sign(algorithm); What mattered to me here was not only that the token worked, but that it fit the shape of the application well. I did not need server-side sessions, and I did not need every request to hit the database just to understand who the caller was.\nInstead, each request can carry its own proof of identity through the Authorization header, and the server can decide from that token who the user is and what roles they have.\nFor me, the useful part was thinking of the token as a small security package:\nthe header describes the token type and signing algorithm the payload contains claims such as subject, userId, and roles the signature is what makes the token verifiable and tamper-resistant I did not build refresh tokens in this version of eru, but I did add token revocation on logout. 
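The portfolio does not show how that revocation is stored; one common approach, sketched here purely as an assumption, is a blocklist that logout writes to and every token validation consults first:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Assumed sketch: revoked tokens are remembered (in a real system, only
// until they would have expired anyway) and checked on every request.
public class TokenBlocklist {
    private final Set<String> revoked = ConcurrentHashMap.newKeySet();

    // Called by the logout flow.
    public void revoke(String token) { revoked.add(token); }

    // Called during token validation, before signature and expiry checks.
    public boolean isActive(String token) { return !revoked.contains(token); }

    public static void main(String[] args) {
        TokenBlocklist blocklist = new TokenBlocklist();
        String token = "header.payload.signature";
        System.out.println(blocklist.isActive(token)); // true
        blocklist.revoke(token);
        System.out.println(blocklist.isActive(token)); // false
    }
}
```

The trade-off is that stateless JWTs regain a small piece of server-side state, which is exactly what makes logout enforceable.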
Token revocation alone was enough to make the flow feel much more like real application security instead of just a classroom example.\nTwo Different Security Questions # One of the biggest things that helped me this week was to stop thinking about “security” as one single operation.\nWhat I really had in front of me were two different questions:\nfirst: is this request actually coming from a valid logged-in user? second: even if it is, should that user be allowed to do this specific thing? That sounds obvious when written out, but it made a huge difference in the code. Once I separated those two concerns mentally, the implementation also became easier to structure.\nThe first step is simply about trust. Can I trust the token enough to treat this request as authenticated? If yes, I turn that into an internal user object:\nAuthenticatedUserDTO authenticatedUser = validateAndGetUserFromToken(ctx); ctx.attribute(CTX_USER_KEY, authenticatedUser); That way, the rest of the request no longer has to care about the raw token string.\nThe second step is about permission. 
At that point, the application already knows:\nwho the user is which roles that user has which roles are allowed for the route So the question shifts from “is this token valid?” to “does this user belong here?”\nI liked this split because it made the flow feel much more natural:\nestablish identity compare identity against route permissions only then allow the controller to run It also made the code much easier to reason about, because I no longer had to mix token verification and permission decisions into the same mental bucket.\nWhere Security Happens Before a Route Runs # Another thing I liked in this phase was that the security logic could live before the business logic, instead of being repeated inside every controller.\nIn ApplicationConfig, I use Javalin’s access-manager flow so the request passes through authentication and authorization before the actual handler runs:\nLegacyAccessManagerKt.legacyAccessManager(app, (handler, ctx, routeRoles) -\u0026gt; { Set\u0026lt;String\u0026gt; allowedRoles = routeRoles.stream() .map(role -\u0026gt; role.toString().toUpperCase()) .collect(Collectors.toSet()); ctx.attribute(\u0026#34;allowed_roles\u0026#34;, allowedRoles); authController.authenticate(ctx); authController.authorize(ctx); handler.handle(ctx); }); What I like about that is that it puts the security rules at the edge of the request flow.\nIn practice, that means:\nroutes declare which roles are allowed authentication validates the token authorization checks the user against the allowed roles only then does the actual controller logic run So by the time a controller gets the request, the security decision has already been made. 
Handling security up front keeps the controller focused on the job it is actually supposed to do.\nLetting the Routes Declare Access Intent # One design choice I ended up appreciating a lot was making the access rules visible directly in the route definitions:\nroutes.post(\u0026#34;/auth/register\u0026#34;, authController::register, AppRole.ANYONE); routes.post(\u0026#34;/auth/login\u0026#34;, authController::login, AppRole.ANYONE); routes.get(\u0026#34;/auth/me\u0026#34;, authController::me, AppRole.USER); routes.post(\u0026#34;/auth/logout\u0026#34;, authController::logout, AppRole.USER); routes.post(\u0026#34;/auth/roles\u0026#34;, authController::addRole, AppRole.ADMIN); That made the system much easier to scan. I did not have to open a controller and then dig through service code just to understand whether an endpoint was public, authenticated, or admin-only.\nThe same pattern also made the rest of the API feel more coherent:\npublic users can browse content authenticated users can use personal endpoints and interactions admins can create, update, and delete content That was important to me because it made the backend easier to defend. It is one thing to build endpoints. It is another thing to explain why the access model makes sense.\nI also think it made the role model easier to read. 
The route itself communicates the access intent, which feels much cleaner than hiding those decisions deeper in the stack.\nReusing the Verified User Throughout the Request # Another small thing that turned out to matter a lot was storing the authenticated user on the request context once the token had already been verified.\nThat made endpoints such as /auth/me and /interactions/me much cleaner, because they could work with an already verified internal user instead of reopening the token logic every time:\nAuthenticatedUserDTO authenticatedUser = getAuthenticatedUser(ctx); ctx.status(200).json(CurrentUserDTO.fromAuthenticatedUser(authenticatedUser)); That may look like a small thing, but I think it was one of the cleaner decisions in the whole security phase. The token gets interpreted once near the security layer, and from there the rest of the request can work with a much simpler internal representation.\nHandling Credentials Without Storing Plain Text Passwords # Another security-related part that became much more meaningful during this phase was password handling.\nIn eru, passwords are not stored in plain text. They are hashed with BCrypt when the user is created:\nthis.passwordHash = BCrypt.hashpw(password, BCrypt.gensalt()); and later verified like this:\nreturn pw != null \u0026amp;\u0026amp; passwordHash != null \u0026amp;\u0026amp; BCrypt.checkpw(pw, passwordHash); This was important because it made the reasoning around authentication much clearer. The system should never need to know the original password after registration. It only needs to store a secure hash and compare future login attempts against that hash.\nThis also helps defend against some of the obvious password-related problems. 
If the database were exposed, the stored password hashes would be far safer than plain text values, and BCrypt\u0026rsquo;s salt generation makes simple rainbow-table attacks much less effective.\nI did not implement password reset, password change, or brute-force protection in this version, so those are still unfinished areas. But using BCrypt was still an important step toward handling credentials in a more realistic and secure way.\nFinal Thoughts on the Seventh Week # The main lesson here was that good security design is not only about picking JWT or BCrypt. It is also about where those concerns live in the architecture. The cleaner that integration is, the easier the system is to understand, test, and extend later.\nPrevious Chapter Next Chapter ","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/security/","section":"Projects","summary":"","title":"07 - Security","type":"projects"},{"content":" Overview 01 02 03 04 05 06 07 08 Overview # The beginning of the end is here, and eru can finally come to life through the full pipeline.\nThe deployment is officially live on https://eru-api.dk/api/v1/routes\nThe Pipeline Picture # Let me introduce you to the full CI/CD pipeline process using GitHub Actions and Docker.\nSo looking at this picture, how did the deployment actually work?\nThe developer (me) writes code locally. I then push the code to GitHub The code is stored in the GitHub repository, which also contains the Dockerfile (which defines how the Java application is packaged into a Docker container and how that container should run) and the GitHub Actions workflow (which automates publishing the application image when changes are pushed to main). A push to the main branch triggers GitHub Actions GitHub Actions builds the project, runs the tests, and prepares the deployment process, unless any test fails. GitHub Actions then builds and pushes a Docker image of the application. 
That new Docker image is then pushed to Docker Hub (all of this is connected to my Docker Hub username cphds), as cphds/eru:latest The server later performs a Docker image pull from Docker Hub instead of receiving code directly from GitHub Watchtower, running on the DigitalOcean server, monitors the image and checks whether the :latest tag has changed When a newer image is available, Watchtower pulls the updated image and replaces the running container, thereby deploying the new version of the Javalin API Caddy acts as a reverse proxy in front of the running container and exposes the API securely over HTTPS on the project domain Seems easy, right? It wasn\u0026rsquo;t.\ndocker-compose.yml # Existing on the droplet is a docker-compose.yml, which is the blueprint for the deployment setup and also where Caddy and Watchtower are configured. It looks somewhat like this:\nversion: \u0026#39;3\u0026#39; services: db: image: postgres:16.2 container_name: db restart: unless-stopped networks: - backend environment: POSTGRES_USER: ${POSTGRES_USER} POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} volumes: - ./data:/var/lib/postgresql/data/ - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql ports: - \u0026#34;5432:5432\u0026#34; healthcheck: test: [\u0026#34;CMD-SHELL\u0026#34;, \u0026#34;pg_isready -U postgres\u0026#34;] interval: 30s timeout: 10s retries: 5 start_period: 10s networks: backend: name: backend Challenges I Encountered During Deployment # Deployment turned out to be one of the most educational and at the same time the most exhausting parts of the project. Even though the application itself was working locally, several issues appeared once I tried to run it as a real deployed system.\nOne of the first problems was that the domain eru-api.dk did not resolve correctly, which meant the API was not reachable from outside the server. 
Because of that, Caddy could not obtain an HTTPS certificate, and repeated failed certificate attempts eventually triggered Let’s Encrypt rate limiting. This meant I had to stop retrying and wait for the next valid issuance window instead of trying to brute-force the setup.\nAnother issue was that PostgreSQL was running, but the actual eru database did not exist yet inside the container. As a result, the application failed during startup because it could not connect to the expected database. I also ran into a Docker-related issue where Watchtower initially failed because of a Docker API version mismatch between the Watchtower container and the Docker daemon on the server.\nDeployment also exposed configuration and security problems in the project itself. One of my Docker files contained a hardcoded database password that had been tracked in Git, which meant I had to treat that secret as compromised and clean up the configuration. Once the API was publicly accessible, I also realised that some route permissions were too loose for a real deployment. For example, the AI endpoint was initially open to everyone, and regular authenticated users had too much power over content management.\nHow I Solved Them # I solved the deployment issues step by step by working from infrastructure inward. The domain problem was fixed by pointing eru-api.dk to the DigitalOcean droplet correctly through DNS. After that, I allowed DNS propagation to complete and retried the HTTPS setup through Caddy. When Let’s Encrypt rate limiting occurred, I simply waited for the retry window instead of continuing to force certificate requests.\nTo fix the application startup issue, I created the missing eru database directly inside the running PostgreSQL container and then restarted the API container. 
The Watchtower issue was solved by explicitly setting a compatible Docker API version in the Watchtower configuration so it could communicate with the server’s Docker daemon correctly.\nFor the configuration cleanup, I removed the hardcoded password from the tracked Docker setup, replaced it with environment variables, and added a cleaner .env.example file. I then tightened the authorization model so that the AI endpoint now requires an authenticated user, while content creation, updating, and deletion are restricted to admins instead of regular users.\nMy Final Thoughts # This has been a journey like no other and quite the learning curve. I\u0026rsquo;m now looking forward to getting creative with the frontend implementation, finally letting loose my inner artist, and giving eru some colour.\nDeployment can be a bit tricky sometimes, but from every mistake you make, you have one less mistake to make!\nPrevious Chapter Back To Overview ","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/deployment/","section":"Projects","summary":"","title":"08 - Deployment \u0026 The Beginning of the End","type":"projects"},{"content":"Welcome to my exam portfolio for eru, a backend API developed during the third-semester backend course.\nThis portfolio documents how the project evolved from an early idea into a deployed Java backend with authentication, content management, user interactions, AI-based elaboration, testing, and deployment.\nThe purpose of this document is not to present a perfect linear success story. Instead, it is meant to show how the system gradually grew as new technologies were introduced week by week, how architectural decisions were made, and how my understanding of backend development matured over time.\neru is a backend for a learning-focused content platform built around short educational posts such as facts, theories, and quotes. 
The goal of the project was to explore how a scrolling-style application could be used for something more meaningful than passive entertainment.\nThe story behind eru Chapter Overview # 01\nThe Beginning From the first project idea and early scope decisions to the initial domain sketch.\n02\nLaying the Foundation Setting up Hibernate, persistence, DAOs, and the first concrete building blocks of the backend.\n03\nRelationship Challenges Figuring out how the entities should relate and modelling interactions in a way that made domain sense.\n04\nImplementing OpenAI's API Integrating AI elaboration through OpenAI and designing the request flow around it.\n05\neru = REST API Building the HTTP layer, DTO boundaries, route structure, and a cleaner application setup.\n06\nREST and Test Testing the API from the outside with integration tests, JWT checks, and Rest Assured assertions.\n07\nSecurity JWT, authorization, authenticated-user context, and integrating security cleanly into the architecture.\n08\nDeployment \u0026 The Beginning of the End The full CI/CD pipeline, live deployment, and the DNS, HTTPS, database, and secret-handling issues that had to be solved along the way.\n","date":"7 April 2026","externalUrl":null,"permalink":"/projects/eru-project/exam-portfolio/","section":"Projects","summary":"","title":"Exam Portfolio","type":"projects"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/projects/eru-project/","section":"Projects","summary":"","title":"eru","type":"projects"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/blog/first-steps/","section":"Blog","summary":"","title":"First Steps","type":"blog"},{"content":"Hello World and welcome to my portfolio page!\nI\u0026rsquo;m Dino, the architect and engineer behind all the projects on this page. Throughout this site, I will share my experiences and thoughts regarding my adventures in the programming universe, which has definitely been interesting so far. 
So let\u0026rsquo;s start with some background!\nWhere It Started # My official programming journey started in February 2025, when I enrolled in an AP degree in Computer Science at what was formerly known as the Copenhagen Business Academy.\nBack then, I wrote my first line of code in Processing, the Java-inspired open-source programming language developed in 2001 by Ben Fry and Casey Reas. Processing is specifically designed for artists, designers, and, you guessed it, me!\nBefore The Code # That being said, I\u0026rsquo;ve never been far from a computer. Ever since I gained consciousness, I\u0026rsquo;ve had an interest in any type of electronic device with a screen attached to it. I was born in 1998, so needless to say, the screens I first had my eyes glued to were usually on the bulkier side compared to the paper-thin screens we see nowadays.\nMy first phone was a classic Nokia 3310 (the absolute pinnacle of indestructible technology). It was soon upgraded to a white and red Nokia 5300. 
From there came a Sony Ericsson Xperia X10 mini, followed by an iPhone 4, which has kept me loyal to Apple ever since.\nThe Question # So why am I telling you this?\nWell, until recently, I had never once stopped and thought: \u0026ldquo;How does my phone work?\u0026rdquo; That curiosity soon developed into: \u0026ldquo;How does any technical program work?\u0026rdquo; And now here we are.\n","date":"7 April 2026","externalUrl":null,"permalink":"/","section":"Portfolio","summary":"","title":"Portfolio","type":"page"},{"content":"","date":"7 April 2026","externalUrl":null,"permalink":"/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/blog/","section":"Blog","summary":"","title":"Blog","type":"blog"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/eru/","section":"Tags","summary":"","title":"Eru","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/first-steps/","section":"Tags","summary":"","title":"First Steps","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/processing/","section":"Tags","summary":"","title":"Processing","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/programming/","section":"Tags","summary":"","title":"Programming","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":" Email: cph-ds303@stud.ek.dk GitHub: github.com/cph-ds303 LinkedIn: linkedin.com/in/dinosaldic 
","externalUrl":null,"permalink":"/contact/","section":"Portfolio","summary":"","title":"Contact","type":"page"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]