SamoraLang Playground: How I built it

Authors
  • Jeffer Marcelino Sunde

Before I start explaining how I built the SamoraLang Playground, I think it’s important to explain why I built it. But wait, do you know SamoraLang? If you don’t, visit SamoraLang’s website first.

When I saw that Ismael GraHms was creating a new programming language, I decided to take a look. To try the language, I needed to download Golang and the SamoraLang repository. That was the issue: I wanted a way to try the language without having to install or download anything, and that’s how I got the idea of creating an online playground.

I had no idea how to start (I’m not a backend guy), so I did a lot of searching. For the architecture I was considering (you’ll see a sequence diagram below), I found out that I needed to learn Docker. I went to The Net Ninja, took their Docker course, and also read a lot about Docker. It took me around three days.

Then I started working on the playground, but I soon ran into another issue: the playground needed to be able to receive and handle multiple requests. For that, I needed to learn about Redis. I had studied Redis before, so I just took a look to refresh my memory on some details. After learning everything I needed, it was time to start building.

How does the playground really work?

[Sequence diagram: a code-execution request flowing through the API, Redis, and the code runner]

This is how my application works. I know, there’s a lot of what we call in Mozambique “magaiva”, which means improvisation.

I’m not sure if the diagram is entirely clear, but just in case, I will explain in detail.

First, the API receives a POST request containing the code to be executed by the SamoraLang interpreter. When the API receives the code, it generates a taskId and saves the taskId along with the code in the code_execution_tasks structure in Redis, which holds all the pending tasks.
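
Just to make that concrete, here’s a rough sketch of the enqueue step. I’m assuming a Node.js backend with the node-redis client and a Redis list named code_execution_tasks; the real backend may use a different language, client, or key layout:

import { randomUUID } from 'crypto';
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect();

// Generate a taskId and push the pending task into the list
// that the code runner later pops from.
async function enqueueTask(code) {
  const taskId = randomUUID();
  await redis.lPush('code_execution_tasks', JSON.stringify({ taskId, code }));
  return taskId;
}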

After saving, the API starts waiting for a result to send back to the user. But how does this happen? I created a function that takes the taskId as an argument and looks in code_execution_results every 500ms to check whether the result is ready. If it finds a result for that taskId, it sends it back to the user. If not, it keeps checking.
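
The waiting part could then be a small helper like the one below: it checks code_execution_results every 500ms until a result for the given taskId shows up. Again, this is only an illustration (I’m assuming results are stored in a Redis hash keyed by taskId), not the exact code from the repository:

import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect();

// Check code_execution_results every 500ms until the result for this
// taskId appears, or give up after a timeout.
async function waitForResult(taskId, timeoutMs = 30000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = await redis.hGet('code_execution_results', taskId);
    if (result != null) {
      return JSON.parse(result);
    }
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  throw new Error('Timed out waiting for the code runner');
}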

But okay, how does code_execution_results get created, and by whom? Remember the diagram? I have a code-runner container running, and every second it pops from code_execution_tasks. You know that a pop operation removes and returns the last element of a structure, right? So when the code runner pops code_execution_tasks every second and the pop returns something, it means there is a pending task. It executes the task and saves the result in code_execution_results. Meanwhile, the API is still waiting; once the result is ready, the API finds it and sends it back to the user.
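
On the other side, the code-runner loop could look roughly like this. I’m assuming a Node.js worker that writes the code to a temporary file and shells out to a hypothetical sml interpreter binary; the actual runner may be written in Go and invoke the interpreter differently:

import { execFile } from 'child_process';
import { promisify } from 'util';
import { writeFile } from 'fs/promises';
import { createClient } from 'redis';

const run = promisify(execFile);
const redis = createClient({ url: 'redis://localhost:6379' });
await redis.connect();

// Every second, pop a pending task (if any), run it through the
// interpreter, and store the output under its taskId.
setInterval(async () => {
  const raw = await redis.rPop('code_execution_tasks');
  if (!raw) return; // no pending tasks

  const { taskId, code } = JSON.parse(raw);
  let output;
  try {
    await writeFile('/tmp/task.sml', code); // file path and extension are placeholders
    const { stdout, stderr } = await run('sml', ['/tmp/task.sml'], { timeout: 5000 });
    output = stdout + stderr;
  } catch (err) {
    output = String(err);
  }

  await redis.hSet('code_execution_results', taskId, JSON.stringify({ output }));
}, 1000);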

By the way, I’m not entirely sure if this is the best approach for this problem. If you have any suggestions, please feel free to contact me or collaborate on the repositories, both on the backend and frontend.

Why did I decide to use this strategy?

Simply put, this was the only approach that came to my mind, especially because I needed to handle multiple users. I chose Redis to establish a connection between the code runner and the API itself.

[Diagram: the API and the code runner communicating through Redis]

This approach does bring about some security concerns. Right now, I have some ideas about how I could potentially “hack” my own application. That’s why I encourage you to take a look at the code.

I understand that this might not be the best approach, but initially, we aim to make it work, and then we can focus on improving it.

What about scalability?

Well, as I mentioned, I'm not a backend expert, but when I consider improving scalability, it seems like a good idea to add a load balancer and an additional instance of the code runner. The load balancer would be responsible for distributing the requests among the code runners.

My current application can only handle one user at a time. But what if it receives 1000 requests simultaneously? It would take a considerable amount of time to respond to all of them. That’s why I’m considering implementing load balancing to distribute the workload more efficiently.

Difficulties

I encountered a couple of issues while developing the playground, but one of them, which I believe was the most serious, occurred because I was using a platform that would shut down my service if it didn’t receive a request within 15 minutes. This turned out to be a major problem. I explored three different possibilities to address this.

  1. Creating a Recursive Container

The idea was suggested by Ismael GraHms, and he proposed creating a sort of “recursive container.” The concept involved having my API container call itself every 14 minutes. By doing this, the containers would remain active and wouldn’t shut down.
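
The sketch below shows how simple that idea is in practice: the container just pings its own public URL on a timer. The URL is a placeholder, and this assumes Node 18+ where fetch is available globally:

// Ping the API's own public URL every 14 minutes so the hosting
// platform never sees 15 minutes of inactivity.
const SELF_URL = 'https://myapi-endpoint.com'; // placeholder

setInterval(() => {
  fetch(SELF_URL).catch((err) => console.error('Self-ping failed:', err));
}, 14 * 60 * 1000);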

  2. Creating a GitHub Action for Requests

Similar to the first approach, I considered setting up a GitHub action to make a request every 14 minutes. This way, the container would remain active and wouldn’t shut down.

For reference, here’s the actual GitHub Action configuration I utilized:

name: Keep SML Playground Alive
on:
  schedule:
    - cron: "*/14 * * * *"  # Run every 14 minutes

jobs:
  execute_code:
    runs-on: ubuntu-latest
    steps:
      - name: Send POST Request to SML Playground
        run: |
          curl -X POST -H "Content-Type: application/json" -d '{"code": "print(1);"}' https://myapi-endpoint.com

This YAML configuration represents the GitHub Action that I used to periodically send a request to maintain the container’s activity.

Indeed, both ideas were valid, but the issue with these approaches is that they waste resources. For instance, if I were to make a request every 14 minutes, around the clock, I’d end up with approximately 102 requests per day. That’s a wasteful use of resources, and I don’t believe it’s a prudent approach to pursue.

So, after thinking more about how to solve this problem, I found the third idea.

  3. Creating a “Wake-Up Request”

The idea I found viable involves sending a “wake-up request” when a user enters my application; this request triggers the container to activate. The aim is to avoid wasting resources. Here’s how it works: whenever someone visits the playground’s website, a wake-up request is sent, regardless of whether the user actively uses the playground or not. This way, the roughly 102 unnecessary daily requests from the previous approaches are never generated.

My “wake-up request” is implemented using a useEffect hook within my React application. In React, useEffect is a hook that runs in response to specific changes. If you don’t specify which changes should trigger it, it runs only once, when the component mounts. In my case, the “wake-up request” runs only at the start, not after any variable changes.

Here’s the code snippet for the useEffect implementation:

useEffect(() => {
  const fetchData = () => {
    // Hit the trigger endpoint so the hosting platform wakes the container up
    axios.get(import.meta.env.VITE_TRIGGER_URL)
      .then(response => {
        console.log('Response:', response.data);
      })
      .catch(error => {
        console.error('Error:', error);
      });
  };

  fetchData();
}, []); // empty dependency array: run once, on mount

In the code snippet, the useEffect hook is set up with an empty dependency array ([]), which means it will execute only once, when the component is mounted. If you wanted the “wake-up request” to also run after a variable change, you would replace the empty dependency array with the appropriate dependency, such as a state variable; the request would then be initiated every time that variable changes.
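
For example, if there were a hypothetical code state variable and you wanted the request to fire again whenever it changes, the hook would look like this:

useEffect(() => {
  axios.get(import.meta.env.VITE_TRIGGER_URL)
    .then(response => console.log('Response:', response.data))
    .catch(error => console.error('Error:', error));
}, [code]); // re-runs the request every time the `code` state changes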

In summary, developing the SamoraLang Playground has been an enlightening journey, involving technical challenges and efficient resource use. From inventive solutions for container activity to user experience improvements, this project offers insights into a user-friendly coding space. These shared strategies and insights enhance functionality and embrace the agile spirit of software development. As technology advances, I’m excited to refine these approaches and collaborate with the community for an enhanced SamoraLang Playground. To explore code or contribute, dive into the repositories and help shape this innovative coding platform.