
Building an AI-Powered YouTube Clipper using n8n

My colleague Annie loves clipping videos from her favorite creators. You know that feeling when you catch a great moment and turn it into a perfect short? That's her jam. But she kept running into this frustrating problem: by the time she saw a new video and got around to clipping it, everyone else had already done it. She was always late to the party.

When she told me about this, I thought, "What if we could automatically clip videos the moment they're published?" That way, she'd have her clips ready to post while the content is still fresh.

So I put my experience with integration tools to work and built something for her—and for anyone else who has this same problem. And you know what? I'm pretty excited to share it with you.

French version here: Automatiser le clipping vidéo YouTube avec l'IA et n8n

What I Created

I put together a set of open-source n8n templates that automatically clip YouTube videos using AI. Here's how it works:

  1. It watches for new videos from your favorite YouTube channel
  2. Sends the video to Reka's AI to create clips automatically
  3. Checks when the clips are ready and sends you an email with the download link

The whole thing runs on n8n (it's a free automation platform), and it uses Reka's Clips API to do the AI magic. Best part? It's completely free to use and set up.

How It Actually Works

I built this using two n8n workflows that work together:

Workflow 1: Submit Reel Creation


This one's the watcher. It monitors a YouTube channel's RSS feed, and the moment a new video drops, it springs into action:

  • Grabs the video URL
  • Sends it to Reka's API with instructions like "Create an engaging short video highlighting the best moments"
  • Gets back a job ID so we can track the progress
  • Saves everything to an n8n data table

The cool thing is you can customize how the clips are made. Want vertical videos for TikTok? Done. Need subtitles? Got it. You can set the clip length anywhere from 0 to 30 seconds. It's all in the JSON configuration.

{
  "video_urls": ["{{ $json.link }}"],
  "prompt": "Create an engaging short video highlighting the best moments",
  "generation_config": {
    "template": "moments",
    "num_generations": 1,
    "min_duration_seconds": 0,
    "max_duration_seconds": 30
  },
  "rendering_config": {
    "subtitles": true,
    "aspect_ratio": "9:16"
  }
}

Workflow 2: Check Reel Status


This one's the patient checker. Since AI takes time to analyze a video and create clips (could be several minutes depending on the video length), we need to check in periodically:

  • Looks at all the pending jobs in our data table
  • Asks Reka's API "Hey, is this one done yet?"
  • When a clip is ready, sends you an email with the download link
  • Marks the job as complete so we don't check it again

I set mine to check every 15-30 minutes. No need to spam the API—good things take time! 😉

Setting It Up (It's Easier Than You Think)

When I was helping Annie set this up (you can watch the full walkthrough below), we got it working in just a few minutes. Here's what you need to do:

Step 1: Create Your Data Table

In n8n, create a new data table. Here's a pro tip I learned the hard way: don't name it "videos"—use something like "clip_jobs" or "reel_records" instead. Trust me on this one; it'll save you some headaches.

Your table needs four columns (all strings):

  • video_title - The name of the video
  • video_url - The YouTube URL
  • job_id - The ID Reka gives us to track the clip
  • job_status - Where we are in the process (queued, processing, completed, etc.)

Step 2: Import the Workflows

Download the two JSON files from the GitHub repo and import them into n8n. They'll show up with some errors at first—that's totally normal! We need to configure them.

Step 3: Configure "Submit Reel Creation"

  1. RSS Feed Trigger: Replace my YouTube channel ID with the one you want to monitor. You can find any channel's ID in their channel URL.

  2. API Key: Head to platform.reka.ai and grab your free API key. Pop it into the Bearer Auth field. Give it a memorable name like "Reka API key" so you know what it is later.

  3. Clip Settings: This is where you tell the AI what kind of clips you want. The default settings create one vertical video (9:16 aspect ratio) up to 30 seconds long with subtitles. But you can change anything:

    • The prompt ("Create an engaging short video highlighting the best moments")
    • Duration limits
    • Aspect ratio (square, vertical, horizontal—your choice)
    • Whether to include subtitles
  4. Data Table: Connect it to that table you created in Step 1.

Step 4: Configure "Check Reel Status"

  1. Trigger: Start with the manual trigger while you're testing. Once everything works, switch it to a schedule trigger (I recommend every 15-30 minutes).

  2. API Key: Same deal as before—add your Reka API key.

  3. Email: Update the email node with your email address. You can customize the subject and body if you want, but the default works great.

  4. Data Table: Make sure all the data table nodes point to your table from Step 1.

Watching It Work

When Annie and I tested it live, that moment when the first clip job came through with a "queued" status? That was exciting. Then checking back and seeing "completed"? Even better. And when that email arrived with the download link? Perfect.

The clips Reka AI creates are actually really good. It analyzes the entire video, finds the key moments (or whatever your prompt asks for), adds subtitles, and packages it all up in a format ready for social media.

Wrap Up

This tool works great whether you're a clipper enthusiast or a content creator looking to generate clips for your own channel. Once you set it up, it just runs. New video drops at 3 AM? Your clip is already processing. You wake up to a download link in your inbox.

It's open source and free to use. Take it, customize it, make it your own. And if you come up with improvements or have ideas, I'd love to hear about them. Share your updates on GitHub or join the conversation in the Reka Community Discord.

Watch the Full Setup

I recorded the entire setup process with Annie (she was testing it for the first time). You can see every step, every click, and yes, even the little mistakes we made along the way. That's real learning right there.


Get Started

Ready to try it? Here's everything you need:

🔗 n8n templates / GitHub: https://link.reka.ai/n8n-clip
🔗 Reka API key: https://link.reka.ai/free (renewable & free)


~frank



Ask AI from Anywhere: No GUI, No Heavy Clients, No Friction

Ever wished you could ask AI from anywhere without needing an interface? Imagine just typing ? and your question in any terminal the moment it pops into your head, and getting the answer right away! In this post, I explain how I wrote a tiny shell script that turns this idea into reality, transforming the terminal into a universal AI client. You can query Reka, OpenAI, or a local Ollama model from any editor, tab, or pipeline—no GUI, no heavy clients, no friction.

Small, lightweight, and surprisingly powerful: once you make it part of your workflow, it becomes indispensable.

💡 All the code scripts are available at: https://github.com/reka-ai/terminal-tools


The Core Idea

There is almost always a terminal within reach—embedded in your editor, sitting in a spare tab, or already where you live while building, debugging, and piping data around. So why break your flow to open a separate chat UI? I wanted to just type a single character (?) plus my question and get an answer right there. No window hopping. No heavy client.

How It Works

The trick is delightfully small: send a single JSON POST request to whichever AI provider you feel like (Reka, OpenAI, Ollama locally, etc.):

# Example: Reka
curl https://api.reka.ai/v1/chat \
     -H "X-Api-Key: <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of thanksgiving?"
             }
           ],
           "model": "reka-core",
           "stream": false
         }'

# Example: Ollama local
curl http://127.0.0.1:11434/api/chat \
     -d '{
           "model": "llama3",
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of thanksgiving?"
             }
           ],
           "stream": false
         }'

Once we get the response, we extract the answer field from it. A thin shell wrapper turns that into a universal “ask” verb for your terminal. Add a short alias (?) and you have the most minimalist AI client imaginable.

Let's go into the details

Let me walk you through the core script step-by-step using reka-chat.sh, so you can customize it the way you like. Maybe this is a good moment to mention that Reka has a free tier that's more than enough for this. Go grab your key—after all, it's free!

The script (reka-chat.sh) does four things:

  1. Captures your question
  2. Loads an API key from ~/.config/reka/api_key
  3. Sends a JSON payload to the chat endpoint with curl.
  4. Extracts the answer using jq for clean plain text.

1. Capture Your Question

This part of the script is a pure laziness hack. I wanted to save keystrokes by not requiring quotes when passing a question as an argument. So ? What is 32C in F works just as well as ? "What is 32C in F".

if [ $# -eq 0 ]; then
    if [ ! -t 0 ]; then
        QUERY="$(cat)"
    else
        exit 1
    fi
else
    QUERY="$*"
fi

2. Load Your API Key

If you're running Ollama locally you don't need any key, but for all other AI providers you do. I store mine in a locked-down file at ~/.config/reka/api_key, then read and trim trailing whitespace like this:

API_KEY_FILE="$HOME/.config/reka/api_key"
API_KEY=$(cat "$API_KEY_FILE" | tr -d '[:space:]')

3. Send The JSON Payload

Building the JSON payload is the heart of the script, including the API_ENDPOINT, API_KEY, and obviously our QUERY. Here’s how I do it for Reka:

RESPONSE=$(curl -s -X POST "$API_ENDPOINT" \
     -H "X-Api-Key: $API_KEY" \
     -H "Content-Type: application/json" \
     -d "{
  \"messages\": [
    {
      \"role\": \"user\",
      \"content\": $(echo "$QUERY" | jq -R -s .)
    }
  ],
  \"model\": \"reka-core\",
  \"stream\": false
}")

4. Extract The Answer

Finally, we parse the JSON response with jq to pull out just the answer text. If jq isn't installed we display the raw response, but a formatted answer is much nicer. If you are customizing for another provider, you may need to adjust the JSON path here. You can add echo "$RESPONSE" >> data_sample.json to the script to log raw responses for tinkering.

With Reka, the response looks like this:

{
    "id": "cb7c371b-3a7b-48d2-829d-70ffacf565c6",
    "model": "reka-core",
    "usage": {
        "input_tokens": 16,
        "output_tokens": 460,
        "reasoning_tokens": 0
    },
    "responses": [
        {
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": " The origin of Thanksgiving ..."
            }
        }
    ]
}

The value we are looking for and want to display is the `content` field inside `responses[0].message`. Using `jq`, we do:

echo "$RESPONSE" | jq -r '.responses[0].message.content // .error // "Error: Unexpected response format"'

Putting It All Together

Now that we have the script, make it executable with chmod +x reka-chat.sh, then add an alias to your shell config to make it super easy to use. Add one line to your .zshrc or .bashrc that looks like this:

alias \?="$REKA_CHAT_SCRIPT"

Because ? is a special character in the shell, we escape it with a backslash. After adding this line, reload your shell configuration with source ~/.zshrc or source ~/.bashrc, and you are all set!

The Result

Now you can ask questions directly from your terminal. Want to know the origin of Thanksgiving? Ask it like this:

? What is the origin of Thanksgiving

And if you prefer to keep the quotes, you do you!

Extra: Web research

I couldn't stop there! Reka also supports web research, which means it can fetch and read web pages to provide more informed answers. Following the same pattern described previously, I wrote a similar script called reka-research.sh that sends a request to Reka's research endpoint. This obviously takes a bit more time to answer, as it's making different web queries and processing them, but the results are often worth the wait—and they are up to date! I used the alias ?? for this one.
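
If you want both aliases in place, here is a minimal sketch of the two lines to add to your .zshrc or .bashrc (the paths are assumptions; point them at wherever you saved the scripts):

alias \?="$HOME/scripts/reka-chat.sh"        # quick chat answers
alias \?\?="$HOME/scripts/reka-research.sh"  # slower, web-backed research answers

Reload your shell configuration afterwards, and both commands are available.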

On the GitHub repository, you can find both scripts (reka-chat.sh and reka-research.sh) along with a script to create the aliases automatically. Feel free to customize them to fit your workflow and preferred AI provider. Enjoy the newfound superpower of instant AI access right from your terminal!

What's Next?

With this setup, the possibilities are endless. Reka supports questions related to audio and video, which could be interesting to explore next. The project is open source, so feel free to contribute or suggest improvements. You can also join the Reka community on Discord to share your experiences and learn from others.


Resources




Check-In Doc MCP Server: A Handy Way to Search Only the Docs You Trust

Ever wished you could ask a question and have the answer come only from a handful of trusted documentation sites—no random blogs, no stale forum posts? That’s exactly what the Check-In Doc MCP Server does. It’s a lightweight Model Context Protocol (MCP) server you can run locally (or host) to funnel questions to selected documentation domains and get a clean AI-generated answer back.

What It Is

The project (GitHub: https://github.com/fboucher/check-in-doc-mcp) is a Dockerized MCP server that:

  • Accepts a user question.
  • Calls the Reka AI Research API with constraints (only allowed domains).
  • Returns a synthesized answer based on live documentation retrieval.

You control which sites are searchable by passing a comma‑separated list of domains (e.g. docs.reka.ai,docs.github.com). That keeps results focused, reliable, and relevant.

What Is the Reka AI Research API?

Reka AI’s Research API lets you blend language model reasoning with targeted, on‑the‑fly web/document retrieval. Instead of a model hallucinating an answer from static training data, it can:

  • Perform limited domain‑scoped web searches.
  • Pull fresh snippets.
  • Integrate them into a structured response.

In this project, we use the research feature with a web_search block specifying:

  • allowed_domains: Only the documentation sites you trust.
  • max_uses: Caps how many retrieval calls it makes per query (controls cost & latency).

Details used here:

  • Model: reka-flash-research
  • Endpoint: http://api.reka.ai/v1/chat/completions
  • Auth: Bearer API key (generated from the Reka dashboard: https://link.reka.ai/free)
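
If you want to try the Research API outside the MCP server, here is a hedged curl sketch of the same kind of request the server builds (the payload mirrors the ResearchService code shown in the next section; the question and domain are just illustrations):

curl http://api.reka.ai/v1/chat/completions \
  -H "Authorization: Bearer <APIKEY>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "reka-flash-research",
        "messages": [
          { "role": "user", "content": "How do I create a personal access token?" }
        ],
        "research": {
          "web_search": {
            "allowed_domains": ["docs.github.com"],
            "max_uses": 4
          }
        }
      }'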

How It Works Internally

The core logic lives in ResearchService (src/Domain/ResearchService.cs). Simplified flow:

  1. Initialization
    Stores the API key + array of allowed domains, sets model & endpoint, logs a safe startup message.

  2. Build Request Payload
    The CheckInDoc(string question) method creates a JSON payload:

    var requestPayload = new {
      model,
      messages = new[] { new { role = "user", content = question } },
      research = new {
        web_search = new {
          allowed_domains = allowedDomains,
          max_uses = 4
        }
      }
    };
    
  3. Send Request
    Creates an HttpRequestMessage (POST), adds the Authorization: Bearer <APIKEY> header, and sends the JSON to Reka.

  4. Parse Response
    Deserializes into a RekaResponse domain object, returns the first answer string.

Adding It to VS Code (MCP Extension)

You can run it as a Docker-based MCP server. Two simple approaches:

Option 1: Via “Add MCP Server” UI

  1. In VS Code (with MCP extension), click Add MCP Server.
  2. Choose type: Docker image.
  3. Image name: fboucher/check-in-doc-mcp.
  4. Enter allowed domains and your Reka API key when prompted.

Option 2: Via mcp.json (Recommended)

Alternatively, you can manually configure it in your mcp.json file. This will make sure your API key isn't displayed in plain text. Add or merge this configuration:

{
  "servers": {
    "check-in-docs": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "ALLOWED_DOMAINS=${input:allowed_domains}
        ",
        "-e",
        "APIKEY=${input:apikey}",
        "fboucher/check-in-doc-mcp"
      ]
    }
  },
  "inputs": [
    {
      "id": "allowed_domains",
      "type": "promptString",
      "description": "Enter the comma-separated list of documentation domains to allow (e.g. docs.reka.ai,docs.github.com):"
    },
    {
      "id": "apikey",
      "type": "promptString",
      "password": true,
      "description": "Enter your Reka Platform API key:"
    }
  ]
}

How to Use It

To use it, just ask to "Check In Doc" something, or use the SearchInDoc tool directly in your MCP-enabled environment. Ask a question, and it will search only the specified documentation domains.

Final Thoughts

It’s intentionally simple—no giant orchestration layer. Just a clean bridge between a question, curated domains, and a research-enabled model. Sometimes that’s all you need to get focused, trustworthy answers.

If this sparks an idea, clone it and adapt away. If you improve it (citations, richer error handling, multi-turn context)—send a PR!

Watch a quick demo


Links & References

AI Vision: Turning Your Videos into Comedy Gold (or Cringe)

I've spent most of my career building software in C# and .NET, and I'd only used Python in IoT projects. When I wanted to build a fun project (an app that uses AI to roast videos), I knew it was the perfect opportunity to finally dig into Python web development.

The question was: where do I start? I hopped into a brainstorming session with Reka's AI chat and asked about options for building web apps in Python. It mentioned Flask, and I remembered friends talking about it being lightweight and perfect for getting started. That sounded right.

In this post, I share how I built "Roast My Life," a Flask app using the Reka Vision API.

The Vision (Pun Intended)

The app needed three core things:

  1. List videos: Show me what videos are in my collection
  2. Upload videos: Let me add new ones via URL
  3. Roast a video: Send a selected video to an AI and get back some hilarious commentary

See it in action

Part 1: Getting Started with Environment Setup

The first hurdle was always going to be environment setup. I'm serious about keeping my Python projects isolated, so I did the standard dance:

Before even touching dependencies, I scaffolded a super bare-bones Flask app. One thing I enjoy from C# is that all dependencies are brought in with one shot, so I like doing the same with my Python projects using requirements.txt instead of installing things ad‑hoc (pip install flask, then freezing later).

Dropping that file in first means the setup snippet below is deterministic. When you run pip install -r requirements.txt, Flask spins up using the exact versions I tested with, and you won't accidentally grab a breaking major update.

Here's the shell dance that activates the virtual environment and installs everything:

cd roast_my_life/workshop
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Then came the configuration. We will need an API key, and I didn't want it hardcoded, so I created a .env file to store my API credentials:

API_KEY=YOUR_REKA_API_KEY
BASE_URL=https://vision-agent.api.reka.ai

To get that API key, I visited the Reka Platform and grabbed a free one. Seriously, a free key for playing with AI vision APIs? I was in.

With python app.py, I fired up the Flask development server and opened http://127.0.0.1:5000 in my browser. The UI was there, but... it was dead. Nothing worked.

Perfect. Time to build.

The Backend: Flask Routing and API Integration

Coming from ASP.NET Core's controller-based routing and Blazor, Flask's decorator-based approach felt right at home. All the code goes in the app.py file, and each route is defined with a simple decorator. But first things first: loading configuration from the .env file using python-dotenv:

from flask import Flask, request, jsonify
import requests
import os
from dotenv import load_dotenv

app = Flask(__name__)

# Load environment variables (like appsettings.json)
load_dotenv()
api_key = os.environ.get('API_KEY')
base_url = os.environ.get('BASE_URL')

All the imported packages are the same ones that need to be listed in requirements.txt. And we retrieve the API key and base URL from environment variables, just like in .NET Core.
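
For reference, here is roughly what that requirements.txt contains to satisfy the imports above (the repo pins exact versions; the package names are what matters):

flask
requests
python-dotenv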

Now, before anything can get roasted, we first need to upload a video to the Reka Vision API. Here's the code—I'll go over some details after.


@app.route('/api/upload_video', methods=['POST'])
def upload_video():
    """Upload a video to Reka Vision API"""
    data = request.get_json() or {}
    video_name = data.get('video_name', '').strip()
    video_url = data.get('video_url', '').strip()
    
    if not video_name or not video_url:
        return jsonify({"error": "Both video_name and video_url are required"}), 400
    
    if not api_key:
        return jsonify({"error": "API key not configured"}), 500
    
    try:
        response = requests.post(
            f"{base_url.rstrip('/')}/videos/upload",
            headers={"X-Api-Key": api_key},
            data={
                'video_name': video_name,
                'index': 'true',  # Required: tells Reka to process the video
                'video_url': video_url
            },
            timeout=30
        )
        
        response_data = response.json() if response.ok else {}
        
        if response.ok:
            video_id = response_data.get('video_id', 'unknown')
            return jsonify({
                "success": True,
                "video_id": video_id,
                "message": "Video uploaded successfully"
            })
        else:
            error_msg = response_data.get('error', f"HTTP {response.status_code}")
            return jsonify({"success": False, "error": error_msg}), response.status_code
            
    except requests.Timeout:
        return jsonify({"success": False, "error": "Request timed out"}), 504
    except Exception as e:
        return jsonify({"success": False, "error": f"Upload failed: {str(e)}"}), 500

Once the information from the frontend is validated, we make a POST request to the Reka Vision API's /videos/upload endpoint. The parameters are sent as form data, and we include the API key in the headers for authentication. Here I was using URLs to upload videos, but you can also upload local files by adjusting the request accordingly. As you can see, it's pretty straightforward, and the documentation from Reka made it easy to understand what was needed.

The Magic: Sending Roast Requests to Reka Vision API

Here's where things get interesting. Once a video is uploaded, we can ask the AI to analyze it and generate content. The Reka Vision API supports conversational queries about video content:

from typing import Any, Dict  # needed for the return type hint below

def call_reka_vision_qa(video_id: str) -> Dict[str, Any]:
    """Call the Reka Video QA API to generate a roast"""
    
    headers = {'X-Api-Key': api_key} if api_key else {}
    
    payload = {
        "video_id": video_id,
        "messages": [
            {
                "role": "user",
                "content": "Write a funny and gentle roast about the person, or the voice in this video. Reply in markdown format."
            }
        ]
    }
    
    try:
        resp = requests.post(
            f"{base_url}/qa/chat",
            headers=headers,
            json=payload,
            timeout=30
        )
        
        data = resp.json() if resp.ok else {"error": f"HTTP {resp.status_code}"}
        
        if not resp.ok and 'error' not in data:
            data['error'] = f"HTTP {resp.status_code} calling chat endpoint"
        
        return data
        
    except requests.Timeout:
        return {"error": "Request to chat API timed out"}
    except Exception as e:
        return {"error": f"Chat API call failed: {e}"}

Here we pass the video ID and a prompt asking for a "funny and gentle roast." The API responds with AI-generated content, which we can then send back to the frontend for display. I tried to give more "freedom" to the AI by asking it to reply in markdown format, which makes the output more engaging.
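
To show how that helper gets wired to the frontend, here is a minimal sketch of a roast route (the /api/roast path and payload shape are my own illustration, not necessarily what the repo uses):

@app.route('/api/roast', methods=['POST'])
def roast_video():
    """Ask Reka Vision to roast the selected video"""
    data = request.get_json() or {}
    video_id = data.get('video_id', '').strip()

    if not video_id:
        return jsonify({"error": "video_id is required"}), 400

    result = call_reka_vision_qa(video_id)

    if 'error' in result:
        return jsonify({"success": False, "error": result['error']}), 502

    # Pass the QA response through; the frontend renders the markdown roast
    return jsonify({"success": True, "roast": result})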

Try It Yourself!

The complete project is available on GitHub: reka-ai/api-examples-python

What Makes the Reka Vision API So Nice to Use

What really stood out to me was how approachable the Reka Vision API is. You don't need any special SDK—just the requests library making standard HTTP calls. And honestly, it doesn't matter what language you're used to; an HTTP call is pretty much always simple to do. Whether you're coming from .NET, Python, JavaScript, or anything else, you're just sending JSON and getting JSON back.

Authentication is refreshingly straightforward: just pop your API key in the header and you're good to go. No complex SDKs, no multi-step authentication flows, no wrestling with binary data streams. The conversational interface lets you ask questions in natural language, and you get back structured JSON responses with clear fields.

One thing worth noting: in this example, the videos are pre-uploaded and indexed, which means the responses come back fast. But here's the impressive part—the AI actually looks at the video content. It's not just reading a transcript or metadata; it's genuinely analyzing the visual elements. That's what makes the roasts so spot-on and contextual.

Final Thoughts

The Reka Vision API itself deserves credit for making video AI accessible. No complicated SDKs, no multi-GB model downloads, no GPU requirements. Just simple HTTP requests and powerful AI capabilities. I'm not saying I'm switching to Python full-time, but expect to see me sharing more Python projects in the future!

References and Resources


Using AI with .NET 10 Scripts: What Worked, What Didn’t, and Lessons Learned

I wanted to kick the tires on the upcoming .NET 10 C# script experience and see how far I could get calling Reka’s Research LLM from a single file, no project scaffolding, no .csproj. This isn’t a benchmark; it’s a practical tour to compare ergonomics, setup, and the little gotchas you hit along the way. I’ll share what worked, what didn’t, and a few notes you might find useful if you try the same.

All the sample code (and a bit more) is here: reka-ai/api-examples-dotnet · csharp10-script. The scripts run a small “top 3 restaurants” prompt so you can validate everything quickly.

We’ll make the same request in three ways:

  1. OpenAI SDK
  2. Microsoft.Extensions.AI for OpenAI
  3. Raw HttpClient

What you need

The C# "script" feature used below ships with the upcoming .NET 10 and is currently available in preview. If you prefer not to install a preview SDK, you can run everything inside the provided Dev Container or on GitHub Codespaces. I include a .devcontainer folder with everything set up in the repo.

Set up your API key

We are talking about APIs here, so of course, you need an API key. The good news is that it's free to sign up with Reka and get one! It's a 2-click process, more details in the repo. The API key is then stored in a .env file, and each script loads environment variables using DotNetEnv.Env.Load(), so your key is picked up automatically. I went this way instead of using dotnet user-secrets because I thought it would be the way it would be done in a CI/CD pipeline or a quick script.
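
For the curious, here is a minimal sketch of the key-loading part at the top of a script (the DotNetEnv package version and the REKA_API_KEY variable name are assumptions; check the repo for the exact values):

#:package DotNetEnv@3.1.1

using System;

// Load .env into environment variables, then read the key
DotNetEnv.Env.Load();
var REKA_API_KEY = Environment.GetEnvironmentVariable("REKA_API_KEY")
    ?? throw new InvalidOperationException("REKA_API_KEY not found; did you create the .env file?");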

Run the demos

From the csharp10-script folder, run any of these scripts. Each line is an alternative:

dotnet run 1-try-reka-openai.cs
dotnet run 2-try-reka-ms-ext.cs
dotnet run 3-try-reka-http.cs

You should see a short list of restaurant suggestions.

Prompt Result: 3 restaurants


OpenAI SDK with a custom endpoint

Reka's API uses the OpenAI format; therefore, I thought of using the NuGet package OpenAI. To reference a package in a script, you use the #:package [package name]@[package version] directive at the top of the file. Here is an example:

#:package OpenAI@2.3.0

// ...

var baseUrl = "http://api.reka.ai/v1";

var openAiClient = new OpenAIClient(new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
    Endpoint = new Uri(baseUrl)
});

var client = openAiClient.GetChatClient("reka-flash-research");

string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";

var completion = await client.CompleteChatAsync(
    new List<ChatMessage>
    {
        new UserChatMessage(prompt)
    }
);

var generatedText = completion.Value.Content[0].Text;

Console.WriteLine($" Result: \n{generatedText}");

The rest of the code is straightforward. You create a chat client, specify the Reka API URL, select the model, and then you send a prompt. And it works just as expected. However, not everything was perfect; before I share more about that part, let's talk about Microsoft.Extensions.AI.

Microsoft Extensions AI for OpenAI

Another common way to use LLMs in .NET is to use one of the Microsoft.Extensions.AI NuGet packages. In our case, Microsoft.Extensions.AI.OpenAI was used.

#:package Microsoft.Extensions.AI.OpenAI@9.8.0-preview.1.25412.6

// ....

var baseUrl = "http://api.reka.ai/v1";

IChatClient client = new ChatClient("reka-flash-research", new ApiKeyCredential(REKA_API_KEY), new OpenAIClientOptions
{
    Endpoint = new Uri(baseUrl)
}).AsIChatClient();

string prompt = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in Montreal";

Console.WriteLine(await client.GetResponseAsync(prompt));

As you can see, the code is very similar. Create a chat client, set the URL, the model, and add your prompt, and it works just as well.

That's two ways to use Reka API with different SDKs, but maybe you would prefer to go "SDKless", let's see how to do that.

Raw HttpClient calling the REST API

Without any SDK to help, there are a few more lines of code to write, but it's still pretty straightforward. Let's see the code:

using System.Text;       // Encoding
using System.Text.Json;  // JsonSerializer

using var httpClient = new HttpClient();

var baseUrl = "http://api.reka.ai/v1/chat/completions";

var requestPayload = new
{
    model = "reka-flash-research",
    messages = new[]
            {
                new
                {
                    role = "user",
                    content = "Give me 3 nice, not crazy expensive, restaurants for a romantic dinner in New York city"
                }
            }
};

// Serialize the anonymous payload to JSON before sending it
var jsonPayload = JsonSerializer.Serialize(requestPayload);

using var request = new HttpRequestMessage(HttpMethod.Post, baseUrl);
request.Headers.Add("Authorization", $"Bearer {REKA_API_KEY}");
request.Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");

var response = await httpClient.SendAsync(request);

var responseContent = await response.Content.ReadAsStringAsync();

var jsonDocument = JsonDocument.Parse(responseContent);

var contentString = jsonDocument.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString();

Console.WriteLine(contentString);

So you create an HttpClient, prepare a request with the right headers and payload, send it, get the response, and parse the JSON to extract the text. In this case, you have to know the JSON structure of the response, but it follows the OpenAI format.

What did I learn from this experiment?

I used VS Code while trying the script functionality. One thing that surprised me was that I didn't get any IntelliSense or autocompletion. I tried disabling the DevKit extension and changing the OmniSharp settings, but no luck. My guess is that it's because the feature is in preview, and it will work just fine in November 2025 when .NET 10 is released.

In this light environment, I encountered some issues: for some reason, I couldn't use an https endpoint, so I had to use http. In the raw HttpClient script, I also had some errors because Reflection wasn't available. It could be related to the preview or something else; I didn't investigate further.

For the most part, everything worked as expected. You can use C# code to quickly execute some tasks without any project scaffolding. It's a great way to try out the Reka API and see how it works.

What's Next?

While writing those scripts, I encountered multiple issues that aren't related to .NET but more about the SDKs when trying to do more advanced functionalities like optimization of the query and formatting the response output. Since it goes beyond the scope of this post, I will share my findings in a follow-up post. Stay tuned!

Video version

Here is a video version of this post


Learn more

Get an AI assistant to do your research

Introducing Reka Research: Your AI Research Assistant

An AI Assistant searching in documents
Meet Reka Research, a powerful AI agent that can search the web and analyze your files to answer complex questions in minutes. Whether you're staying up to date with AI news, screening resumes, or researching technical topics, Reka Research does the heavy lifting for you.


What Makes Reka Research Special?

Reka Research stands out in three key areas:

  • Top Performance: Best in class results on research benchmarks
  • Fast Results: Get thorough answers in 1-3 minutes
  • Full Transparency: See exactly how the AI reached its conclusions. All steps are visible.

Smart Web Search That Actually Works

Ever wished you could ask someone to research the latest AI developments while you focus on other work? That's exactly what Reka Research does.

Watch how it works:

In this demo, Jess and Sharath show how Reka Research can automatically gather the most important AI news from the past week. The AI visits multiple websites, takes notes, and presents a clean summary with sources. You can even restrict searches to specific domains or set limits on how many sites to check.

File Search for Your Private Documents

Sometimes the information you need isn't on the web - it's in your company's documents, meeting notes, or file archives. Reka Research can search through thousands of private files to find exactly what you're looking for.

See it in action:  

In this example, Jess and Sharath show how HR teams can use Reka Research to quickly screen resumes. Instead of manually reviewing hundreds of applications, the AI finds candidates who meet specific requirements (like having a computer science degree and 3+ years of backend experience) in seconds!

Writing Better Prompts Gets Better Results

Like any AI tool, Reka Research works best when you know how to ask the right questions. The key is being specific about what you want and providing context.

Learn the techniques:

Jess and Yi share practical tips for getting the most out of Reka Research. Instead of asking "summarize meeting minutes," try "summarize April meeting minutes about public participation." The more specific you are, the better your results will be.

Ready to Try Reka Research?

Reka Research is currently available for everyone! Try it via the playground or directly through the API. Whether you're researching competitors, analyzing documents, or staying current with industry trends, it can save you hours of work.

Want to learn more and connect with other users? Join our Discord community where you can:

  • Get tips from other Reka Research users
  • Share your use cases and success stories
  • Stay updated on new features and releases
  • Ask questions directly to our team

Join our Discord Community

Ready to transform how you research? Visit reka.ai to learn more about Reka Research and our other AI tools.

Why Your .NET Code Coverage Badge is 'Unknown' in GitLab (And How to Fix It)


In a recent post, I shared how to set up a CI/CD pipeline for a .NET Aspire project on GitLab. The pipeline includes unit tests, security scanning, and secret detection, and if any of those fail, the pipeline would fail. Great, but what about code coverage for the unit tests? The pipeline included code coverage commands, but the coverage was not visible in the GitLab interface. Let's fix that.

(blog post en français ici)

Badge on Gitlab showing coverage unknown

The Problem

One thing I initially thought was that the regex used to extract the coverage was incorrect. The regex used in the pipeline was:

coverage: '/Total\s*\|\s*(\d+(?:\.\d+)?)%/'

That regex came directly from the GitLab documentation, so I thought it should work correctly. However, coverage still wasn't visible in the GitLab interface.

So with the help of GitHub Copilot, we wrote a few commands to validate:

  • That the coverage.cobertura.xml was in a consistent location (instead of being in a folder with a GUID name)
  • That the coverage.cobertura.xml file was in a valid format
  • What exactly the regex was looking for

Everything checked out fine, so why was the coverage not visible?

The Solution

It turns out that the coverage setting with the regex expression scans the console output, not the coverage.cobertura.xml file. Aha! One solution was to install a dotnet tool to change where the test results were persisted (to the console instead of the XML file), but I preferred keeping the .NET environment unchanged.

The solution I ended up implementing was executing a grep command to extract the coverage from the coverage.cobertura.xml file and then echoing it to the console. Here's what it looks like:

- COVERAGE=$(grep -o 'line-rate="[0-9.]*"' TestResults/coverage.cobertura.xml | head -1 | grep -o '[0-9.]*' | awk '{printf "%.1f", $1*100}')
- echo "Total | ${COVERAGE}%"

Results

And now when the pipeline runs, the coverage is visible in the GitLab pipeline!



And the badge is updated to show the coverage percentage.

Coverage badge showing percentage


Complete Configuration

Here's the complete test job configuration. Of course, the full .gitlab-ci.yml file is available in the GitLab repository.

test:
  stage: test
  image: mcr.microsoft.com/dotnet/sdk:9.0
  <<: *dotnet_cache
  dependencies:
    - build
  script:
    - dotnet test $SOLUTION_FILE --configuration Release --logger "junit;LogFilePath=$CI_PROJECT_DIR/TestResults/test-results.xml" --logger "console;verbosity=detailed" --collect:"XPlat Code Coverage" --results-directory $CI_PROJECT_DIR/TestResults
    - find TestResults -name "coverage.cobertura.xml" -exec cp {} TestResults/coverage.cobertura.xml \;
    - COVERAGE=$(grep -o 'line-rate="[0-9.]*"' TestResults/coverage.cobertura.xml | head -1 | grep -o '[0-9.]*' | awk '{printf "%.1f", $1*100}')
    - echo "Total | ${COVERAGE}%"
  artifacts:
    when: always
    reports:
      junit: "TestResults/test-results.xml"
      coverage_report:
        coverage_format: cobertura
        path: "TestResults/coverage.cobertura.xml"
    paths:
      - TestResults/
    expire_in: 1 week
  coverage: '/Total\s*\|\s*(\d+(?:\.\d+)?)%/'

Conclusion

I hope this helps others save time when setting up code coverage for their .NET projects on GitLab. The key insight is that GitLab's coverage regex works on console output, not on the files (XML or other formats).

If you have any questions or suggestions, feel free to reach out!


~frank



How to Have GitLab CI/CD for a .NET Aspire Project

Getting a complete CI/CD pipeline for your .NET Aspire solution doesn't have to be complicated. I've created a template that gives you everything you need to get started in minutes.

(blog post en français ici)

Watch the Video


Part 1: The Ready-to-Use Template

I've built a .NET Aspire template that comes with everything configured and ready to go. Here's what you get:

What's Included

  • A classic .NET Aspire Starter project (API and frontend)
  • Unit tests using xUnit (easily adaptable to other testing frameworks)
  • Complete .gitlab-ci.yml pipeline configuration
  • Security scanning and secret detection
  • All documentation you need

What the Pipeline Does

The pipeline runs two main jobs automatically:

  1. Build: Compiles your code
  2. Test: Runs all unit tests, scans for vulnerabilities, and checks for accidentally committed secrets (API keys, passwords, etc.)

You can see all test results directly in GitLab's interface, making it easy to track your project's health.
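
To give you a feel for the shape of the pipeline, here is a stripped-down sketch of the two jobs (the template's actual .gitlab-ci.yml adds caching, security scanning, secret detection, and test reporting on top of this):

stages:
  - build
  - test

build:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:9.0
  script:
    - dotnet restore
    - dotnet build --configuration Release --no-restore

test:
  stage: test
  image: mcr.microsoft.com/dotnet/sdk:9.0
  needs:
    - build
  script:
    - dotnet restore
    - dotnet test --configuration Release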

How to Get Started

It's simple:

  1. Clone the template repository: cloud5mins/aspire-template
  2. Replace the sample project with your own .NET Aspire code
  3. Push to your GitLab repository
  4. Watch your CI/CD pipeline run automatically

That's it! You immediately get automated builds, testing, and security scanning.

Pro Tip: The best time to set up CI/CD is when you're just starting your project because everything is still simple.


Part 2: Building the Template with GitLab Duo

Now let me share my experience creating this template using GitLab's AI assistant, GitLab Duo.

Starting Simple, Growing Smart

I didn't build this complex pipeline all at once. I started with something very basic and used GitLab Duo to gradually add features. The AI helped me:

  • Add secret detection when I asked: "How can I scan for accidentally committed secrets?"
  • Fix test execution issues when my unit tests weren't running properly
  • Optimize the pipeline structure for better performance
screen capture in VSCode using GitLab Duo to change the default location for the job SAST

Working with GitLab in VS Code

While you can edit .gitlab-ci.yml files directly in GitLab's web interface, I prefer VS Code. Here's my setup:

  1. Install the official GitLab extension from the VS Code marketplace

Once you've signed in, this extension gives you:

  • Direct access to GitLab issues and work items
  • AI-powered chat with GitLab Duo

GitLab Duo in Action

GitLab Duo became my pair programming partner. Here's how I used it:

Understanding Code: I could type /explain and ask Duo to explain what any part of my pipeline configuration does by highlighting that section.

screen capture in VSCode using GitLab Duo to explain part of the code

Solving Problems: When my solution didn't compile, I described the issue to Duo and got specific suggestions. For example, it helped me realize some projects weren't in .NET 9 because dotnet build required the Aspire workload. I could either keep my project in .NET 8 and add a before_script instruction to install the workload, or upgrade to .NET 9; I picked the latter.

Adding Features: I started with just build and test, then incrementally asked Duo to help me add security scanning, secret detection, and better error handling.

Adding Context: Using /include to add the project file or the .gitlab-ci.yml file while asking questions helped Duo understand the context better.

Learn More with the Docs: During my journey, I knew Duo wasn't just making things up as it was referencing the documentation. I could continue my learning there and read more examples of how before_script is used in different contexts.

The AI-Assisted Development Experience

What impressed me most was how GitLab Duo helped me learn while building. Instead of just copying configurations from documentation, each conversation taught me something new about GitLab CI/CD best practices.

Conclusion

I think this template can be useful for anyone starting a .NET Aspire project. Ready to try it? Clone the template at cloud5mins/aspire-template and start building with confidence.

Whether you're new to .NET Aspire or CI/CD, this template gives you a good foundation. And if you want to customize it further, GitLab Duo is there to help you understand and modify the configuration.

If you think we should add more features or improve the template, feel free to open an issue in the repository. Your feedback is always welcome!

Screen capture of the Aspire template project on GitLab




Thanks to David Fowler for his feedback!

How to Convert Code with GitHub Copilot: Can AI Really Help?

Recently, someone asked me an interesting question: "Can GitHub Copilot or AI help me convert an application from one language to another?" My answer was a definitive yes! AI can not only help you write code in a new language, but it can also improve team collaboration and bridge the knowledge gap between developers who know different programming languages.

(Version française ici)

Setting Up the Environment

To demonstrate this capability, I decided to convert a COBOL application to Java—a perfect test case since I don't know either language well, which means I really needed Copilot to do the heavy lifting. All the code is available on GitHub.

The first step was setting up a proper development environment. I used a dev container and asked Copilot to help me build it. I also asked for recommendations on the best VS Code extensions for Java development. Within just a few minutes, I had a fully configured environment ready for Java development.

Choosing the Right Copilot Agent

When working with GitHub Copilot for code conversion, you have different modes to choose from:

  • Ask: Great for general questions (like asking about Java extensions)
  • Edit: Perfect for simple document editing (like modifying the generated code)
  • Agent: The powerhouse for complex tasks involving multiple files, imports, and structural changes

For code conversion projects, the Agent is your best friend. It can look at different source files, understand project structure, edit code, and even create new files on your behalf.

The Conversion Process

I used Claude 3.5 Sonnet for this conversion. Here's the simple prompt I used:

"Convert this hello business COBOL application into Java"

Copilot didn't just convert the code, it also provided detailed information about how to execute the Java application, which was invaluable since I had no Java experience.

The results varied depending on the AI model used (Claude, GPT, Gemini, etc.), but the core functionality remained consistent across different attempts. Since the original application was simple, I converted it multiple times using different prompts and models to test the consistency. Sometimes it generated a single file, other times it created multiple files: a main application and an Employee class (which wasn't in my original COBOL version). Sometimes it updated the Makefile to allow compilation and execution using make, while other times it provided instructions to use javac and java commands directly.

This variability is expected with generative AI: results will differ between runs, but the core functionality remains reliable.

Real-World Challenges

Of course, the conversion wasn't perfect on the first try. For example, I encountered runtime errors when executing the application. The issue was with the data format—the original file used a flat file format with fixed length records (19 characters per record) and no line breaks.

I went back to Copilot, highlighted the error message from the terminal, and provided additional context about the 19 character record format. This iterative approach is key to successful AI assisted conversion.

"It's not working as expected, check the error in #terminalSelection. The records have fixed length of 19 characters without line breaks. Adjust the code to handle this format"

The Results

After the iterative improvements, my Java application successfully:

  • Compiled without errors
  • Processed all employee records
  • Generated a report with employee data
  • Calculated total salary (a nice addition that wasn't in the original)

While the output format wasn't identical to the original COBOL version (missing leading zeros, different line spacing), the core functionality was preserved.

Video Demonstration

Watch the complete conversion process in action:

Best Practices for AI-Assisted Code Conversion

Based on this experience, here are my recommendations:

1. Start with Small Pieces

Don't try to convert thousands of lines at once. Break your conversion into manageable modules or functions.

2. Set Up Project Standards

Consider creating a .github folder at your project root with an instructions.md file containing:

  • Best practices for your target language
  • Patterns and tools to use
  • Specific versions and frameworks
  • Enterprise standards to follow
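
For example, a tiny instructions.md for a COBOL-to-Java conversion could look like this (the content is illustrative; adapt it to your own standards):

  • Target Java 21 with Maven
  • Prefer plain JDK classes; avoid adding frameworks unless asked
  • Keep file and record layouts identical to the COBOL source (fixed-length records)
  • Generate JUnit tests for every converted module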

3. Stay Involved in the Process

You're not just a spectator - you're an active participant. Review the changes, test the output, and provide feedback when things don't work as expected.

4. Iterate and Improve

Don't expect perfection on the first try. In my case, the converted application worked but produced slightly different output formatting. This is normal and expected; after all, you are converting between two different languages with different conventions and styles.

Can AI Really Help with Code Conversion?

Absolutely, yes! GitHub Copilot can significantly:

  • Speed up the conversion process
  • Help with syntax and language specific patterns
  • Provide guidance on running and compiling the target language
  • Bridge knowledge gaps between team members
  • Generate supporting files and documentation

However, remember that it's generative AI: results will vary between runs, and you shouldn't expect identical output every time.

Final Thoughts

GitHub Copilot is definitely a tool you need in your toolkit for conversion projects. It won't replace the need for human oversight and testing, but it will dramatically accelerate the process and help teams collaborate more effectively across different programming languages.

The key is to approach it as a collaborative process where AI does the heavy lifting while you provide guidance, context, and quality assurance. Start small, iterate often, and don't be afraid to ask for clarification or corrections when the output isn't quite right.

Have you tried using AI for code conversion? I'd love to hear about your experiences in the comments below! Visit c5m.ca/copilot to get started with GitHub Copilot.

References


Stop Writing Git Commits: How AI-Powered GitKraken CLI Accelerates Your Development

As developers, we're constantly looking for tools that can help us stay in the flow and be more productive. Today, I want to share a powerful tool that's been gaining traction in the developer community: GitKraken CLI. This command-line interface brings together several key features that modern developers love - it's AI-powered, terminal-based, and incredibly efficient for managing Git workflows.

(Version française ici)

What Makes GitKraken CLI Special?

GitKraken CLI (accessible via the gk command) stands out because it simplifies complex Git workflows while adding intelligent automation. Unlike traditional Git commands, it provides a more intuitive workflow management system that can handle multiple repositories simultaneously.

Getting Started

Installation is straightforward. On Windows, you can install it using:

winget install gitkraken.cli

Once installed, you'll have access to the gk command, which becomes your gateway to streamlined Git operations.

The Workflow in Action

Let's walk through a typical development session using GitKraken CLI:

1. Starting a Work Session

Instead of manually creating branches and switching contexts, you can start a focused work session:

gk w start "Add Behind my Cloud feed" -i "Add Behind my Cloud feed #1"

This single command:

  • Creates a new branch based on your issue/feature name
  • Switches to that branch automatically
  • Links the work session to a specific issue
  • Sets up your development environment for focused work

2. Managing Multiple Work Sessions

You can easily see all your active work sessions:

gk w list

This is particularly powerful when working across multiple repositories or juggling several features simultaneously.

3. Committing with Intelligence

After making your changes, adding files works as expected:

gk add .

But here's where the AI magic happens. Instead of writing commit messages manually:

gk w commit --ai

The AI analyzes your changes and generates meaningful, descriptive commit messages automatically. No more "quick fix" or "update stuff" commits!

4. Pushing and Creating Pull Requests

Publishing your work is equally streamlined:

gk w push

And when you're ready to create a pull request:

gk w pr create --ai

Again, AI assistance helps generate appropriate PR titles and descriptions based on your work.

5. Wrapping Up

Once your work is complete and merged, clean up is simple:

gk w end

This command:

  • Switches you back to the main branch
  • Deletes the feature branch, locally and on GitHub
  • Closes the work session
  • Leaves your repository clean and ready for the next task
all the commands


Why This Matters

The beauty of GitKraken CLI lies in its ability to keep you in the zone. You don't need to:

  • Switch between multiple tools
  • Remember complex Git commands
  • Write commit messages from scratch
  • Manually manage branch lifecycle

Everything flows naturally from one command to the next, maintaining your focus on what matters most: writing code.

Multi-Repository Power

One of the standout features is GitKraken CLI's ability to manage multiple repositories simultaneously. This is invaluable for:

  • Microservices architectures
  • Full-stack applications with separate frontend/backend repos
  • Organizations with multiple related projects

Try It Yourself

GitKraken CLI is part of a broader suite of developer tools that GitKraken offers. The CLI itself is free to use, which makes it easy to experiment with and integrate into your workflow without any upfront commitment. If you find value in the CLI and want to explore their other tools, GitKraken has various products that might complement your development setup.

The learning curve is genuinely minimal since it builds on Git concepts you already know while adding helpful automation. I've found that even small workflow improvements can compound over time, especially when you're working on multiple projects or dealing with frequent context switching.

If you're curious about what else GitKraken offers beyond the CLI, you can explore their full product lineup here. For those who decide the Pro features would benefit their workflow, as an ambassador of GitKraken I can share my code to provide a 50% discount for your GitKraken Pro subscription.

The combination of AI assistance and intuitive commands addresses real pain points that many developers face daily. Whether GitKraken CLI becomes a core part of your toolkit will depend on your specific workflow, but it's worth trying given that it's free and takes just a few minutes to set up.



The best tools are the ones that get out of your way and let you focus on building. GitKraken CLI aims to do exactly that.

I Co-Wrote 88 Unit Tests Using AI: A Developer's Journey

Testing has always been one of those tasks that developers know is essential but often find tedious. When I decided to add comprehensive unit tests to my NoteBookmark project, I thought: why not make this an experiment in AI-assisted development? What followed was a fascinating 4-hour journey that resulted in 88 unit tests, a complete CI/CD pipeline, and some valuable insights about working with AI coding assistants.

(Version française ici)

The Project: NoteBookmark

NoteBookmark is a .NET application built with C# that helps users manage and organize their reading notes and bookmarks. The project includes an API, a Blazor frontend, and uses Azure services for storage. You can check out the complete project on GitHub.

The Challenge: Starting from Zero

I'll be honest - it had been a while since I'd written comprehensive unit tests. Rather than diving in myself, I decided to see how different AI models would approach this task. My initial request was deliberately vague: "add a test project" without any other specifications.

Looking back, I realize I should have been more specific about which parts of the code I wanted covered. This would have made the review process easier and given me better control over the scope. But sometimes, the best learning comes from letting the AI surprise you.

The Great AI Model Comparison



GPT-4.1: Competent but Quiet

GPT-4.1 delivered decent results, but the experience felt somewhat mechanical. The code it generated was functional, but I found myself wanting more context. The explanations were minimal, and I often had to ask follow-up questions to understand the reasoning behind certain test approaches.

Gemini: The False Start

My experience with Gemini was... strange. Perhaps it was a glitch or an off day, but most of what was generated simply didn't work. I didn't persist with this model for long, as debugging AI-generated code that fundamentally doesn't function defeats the purpose of the exercise. Note that at the time of this writing, Gemini was still in preview, so I expect it to improve over time.

Claude Sonnet: The Clear Winner

This is where the magic happened. Claude Sonnet became my co-pilot of choice for this project. What set it apart wasn't just the quality of the code (though that was excellent), but the quality of the conversation. It felt like having a thoughtful colleague thinking out loud with me.

The explanations were clear and educational. When Claude suggested a particular testing approach, it would explain why. When it encountered a complex scenario, it would walk through its reasoning. I tried different versions of Claude Sonnet but didn't notice significant differences in results - they were all consistently good.

The Development Process: A 4-Hour Journey


Hour 1-2: Getting to Compilation

The first iteration couldn't compile. This wasn't surprising given the complexity of the codebase and the vague initial request. But here's where the AI collaboration really shined. Instead of manually debugging everything myself, I worked with Copilot to identify and fix issues iteratively.

We went through several rounds of:

  1. Identify compilation errors
  2. Discuss the best approach to fix them
  3. Let the AI implement the fixes
  4. Review and refine

After about 2 hours, we had a test project with 88 unit tests that compiled successfully. The AI had chosen xUnit as the testing framework, which I was happy with - it's a solid choice I might not have landed on myself, given how rusty I was on the current .NET testing landscape.
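
To give a sense of what these tests look like, here's a minimal xUnit sketch in the same spirit. The Bookmark record and the test are hypothetical, only there to show the shape of a [Fact] test, not the actual NoteBookmark code.

using Xunit;

// Hypothetical model, defined here only to keep the sketch self-contained.
public record Bookmark(string Title, string Url, bool IsRead = false);

public class BookmarkTests
{
    [Fact]
    public void NewBookmark_IsUnreadByDefault()
    {
        // Arrange: create a bookmark with the default flags
        var bookmark = new Bookmark("My article", "https://example.com");

        // Assert: a freshly created bookmark should not be marked as read
        Assert.False(bookmark.IsRead);
    }
}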

Hour 2.5-3.5: Making Tests Pass

Getting the tests to compile was one thing; getting them to pass was another challenge entirely. This phase taught me a lot about both my codebase and xUnit features I wasn't familiar with.

I relied heavily on the /explain feature during this phase. When tests failed, I'd ask Claude to explain what was happening and why. This was invaluable for understanding not just the immediate fix, but the underlying testing concepts.

One of those moments was learning about [InlineData(true)] and other xUnit data attributes. These weren't features I was familiar with, and having them explained in context made them immediately useful.


InlineData in the code

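If, like me, you hadn't seen them before: [Theory] marks a parameterized test, and each [InlineData(...)] attribute supplies one set of arguments, so a single method runs as several test cases (including simple booleans like [InlineData(true)]). Here's a minimal sketch; the ToSlug helper is hypothetical and only exists to make the example self-contained.

using Xunit;

public class SlugTests
{
    // Hypothetical helper, included only so the sketch compiles on its own.
    private static string ToSlug(string title) =>
        title.Trim().ToLowerInvariant().Replace(' ', '-');

    [Theory]
    [InlineData("Hello World", "hello-world")]
    [InlineData("  Trim Me  ", "trim-me")]
    [InlineData("already-a-slug", "already-a-slug")]
    public void ToSlug_ProducesExpectedResult(string input, string expected)
    {
        // Each [InlineData] above becomes its own test case in the runner.
        Assert.Equal(expected, ToSlug(input));
    }
}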

Hour 3.5-4: Structure and Style

Once all tests were passing, I spent time making sure I understood each test and requesting structural changes to match my preferences. This phase was crucial for taking ownership of the code: just because AI wrote it doesn't mean it should remain a black box. It's worth repeating: understanding the code is essential; just because AI wrote it doesn't mean it's good.

Beyond Testing: CI/CD Integration

With the tests complete, I asked Copilot to create a GitHub Actions workflow to run the tests on every push to the main and v-next branches, plus on pull requests. Initially, it started modifying my existing workflow, the one that takes care of the Azure deployment. I wanted a separate workflow for testing, so I interrupted it (nice that I wasn't forced to wait) and asked it to create a new one instead. The result was the running-unit-tests.yml workflow, which worked perfectly on the first try.

This was genuinely surprising. CI/CD configurations often require tweaking, but the generated workflow handled:

  • Multi-version .NET setup
  • Dependency restoration
  • Building and testing
  • Test result reporting
  • Code coverage analysis
  • Artifact uploading

Code coverage


The PR Enhancement Adventure

Here's where things got interesting. When I asked Copilot to enhance the workflow to show test results in PRs, it started adding components, then paused and asked if it could delete the current version and start from scratch.

I said yes, and I'm glad I did. The rebuilt version created beautiful PR comments showing:

  • Test results summary
  • Code coverage reports (which I didn't ask for but appreciated)
  • Detailed breakdowns

PR display


The Finishing Touches

No project is complete without proper status indicators. I added a test status badge to the README, giving anyone visiting the repository immediate visibility into the project's health.

test status badge


Key Takeaways


What Worked Well

  1. AI as a Learning Partner: Having Copilot explain testing concepts and xUnit features was like having a patient teacher
  2. Iterative Refinement: The back-and-forth process felt natural and productive
  3. Comprehensive Solutions: The AI didn't just write tests; it created a complete testing infrastructure
  4. Quality Over Speed: While it took 4 hours, the result was thorough and well-structured

What I'd Do Differently

  1. Be More Specific Initially: Starting with clearer scope would have streamlined the process
  2. Set Testing Priorities: Identifying critical paths first would have been valuable
  3. Plan for Visual Test Reports: Thinking about test result visualization from the start

Lessons About AI Collaboration

  • Model Choice Matters: The difference between AI models was significant
  • Conversation Quality Matters: Clear explanations make the collaboration more valuable
  • Trust but Verify: Understanding every piece of generated code is crucial
  • Embrace Iteration: The best results come from multiple refinement cycles

The Bigger Picture

This experiment reinforced my belief that AI coding assistants are most powerful when they're true collaborators rather than code generators. The value wasn't just in the 88 tests that were written, but in the learning that happened along the way.

For developers hesitant about AI assistance in testing: this isn't about replacing your testing skills, it's about augmenting them. The AI handles the boilerplate and suggests patterns, but you bring the domain knowledge and quality judgment.

Conclusion

Would I do this again? Absolutely. The combination of comprehensive test coverage, learning opportunities, and time efficiency made this a clear win. The 4 hours invested created not just tests, but a complete testing infrastructure that will pay dividends throughout the project's lifecycle.

If you're considering AI-assisted testing for your own projects, my advice is simple: start the conversation, be prepared to iterate, and don't be afraid to ask "why" at every step. The goal isn't just working code - it's understanding and owning that code.

The complete test suite and CI/CD pipeline are available in the NoteBookmark repository if you want to see the results of this AI collaboration in action.


Full-Stack Azure Deployment Made Easy: Containers, Databases, and More

Automating deployments is something I always enjoy. However, it's true that it often takes more time than a simple "right-click deploy." Plus, you may need to know different technologies and scripting languages.

(French version here)

But what if there was a tool that could help you write everything you need—Infrastructure as Code (IaC) files, scripts to copy files, and scripts to populate a database? In this post, we'll explore how the Azure Developer CLI (azd) can make deployments much easier.

What do we want to do?

Our goal: Deploy the 2D6 Dungeon App to Azure Container Apps.

This .NET Aspire solution includes:

  • A frontend
  • A data API
  • A database

Aspire resources schema


The Problem

In a previous post, we showed how azd up can easily deploy web apps to Azure.

If we try the same command for this solution, the deployment will be successful—but incomplete:

  • The .NET Blazor frontend is deployed perfectly.
  • However, the app fails when trying to access data.
  • Looking at the logs, we see the database wasn't created or populated, and the API container fails to start.

Let's look more closely at these issues.

The Database

When running the solution locally, Aspire creates a MySQL container and executes SQL scripts to create and populate the tables. This is specified in the AppHost project:

var mysql = builder.AddMySql("sqlsvr2d6")
                   .WithLifetime(ContainerLifetime.Persistent);
                
var db2d6 = mysql.AddDatabase("db2d6");

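// Mount the local scripts folder so MySQL executes these SQL files when the container starts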
mysql.WithInitBindMount(source: "../../database/scripts", isReadOnly: false);

When MySQL starts, it looks for SQL files in a specific folder and executes them. Locally, this works because the bind mount is mapped to a local folder with the files.

However, when deployed to Azure:

  • The mounts are created in Azure Storage Files
  • The files are missing!

The Data API

This project uses Data API Builder (dab). Based on a single config file, a full data API is built and hosted in a container.

Locally, Aspire creates a DAB container and reads the JSON config file to create the API. This is specified in the AppHost project:

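// Point DAB at the JSON config file and only start it once the database is ready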
var dab = builder.AddDataAPIBuilder("dab", ["../../database/dab-config.json"])
                .WithReference(db2d6)
                .WaitFor(db2d6);

But once again, when deployed to Azure, the file is missing. The DAB container starts but fails to find the config file.

Logs of DAB failing to start


The Solution

The solution is simple: the SQL scripts and DAB config file need to be uploaded into Azure Storage Files during deployment.

You can do this by adding a post-provision hook in the azure.yaml file to execute a script that uploads the files. See an example of a post-provision hook in this post.

Alternatively, you can leverage azd alpha features: azd.operations and infraSynth.

  • azd.operations extends the provisioning providers and will upload the files for us.
  • infraSynth generates the IaC files for the entire solution.

💡Note: These features are in alpha and subject to change.

Each azd alpha feature can be turned on individually. To see all features:

azd config list-alpha

To activate the features we need:

azd config set alpha.azd.operations on
azd config set alpha.infraSynth on

Let's Try It

Once the azd.operations feature is activated, any azd up will now upload the files into Azure. If you check the database, you'll see that the db2d6 database was created and populated. Yay!

However, the DAB API will still fail to start. Why? Because, currently, DAB looks for a file, not a folder, when it starts. This can be fixed by modifying the IaC files.

One Last Step: Synthesize the IaC Files

First, let's synthesize the IaC files. These Bicep files describe the required infrastructure for our solution.

With the infraSynth feature activated, run:

azd infra synth

You'll now see a new infra folder under the AppHost project, with YAML files matching the container names. Each file contains the details for creating a container.

Open the dab.tmpl.yaml file to see the DAB API configuration. Look for the volumeMounts section. To help DAB find its config file, add subPath: dab-config.json to make the binding more specific:

containers:
    - image: {{ .Image }}
      name: dab
      env:
        - name: AZURE_CLIENT_ID
          value: {{ .Env.MANAGED_IDENTITY_CLIENT_ID }}
        - name: ConnectionStrings__db2d6
          secretRef: connectionstrings--db2d6
      volumeMounts:
        - volumeName: dab-bm0
          mountPath: /App/dab-config.json
          subPath: dab-config.json
scale:
    minReplicas: 1
    maxReplicas: 1

You can also specify the minimum and maximum number of replicas used for scaling if you wish.

Now that the IaC files are created, azd will use them. If you run azd up again, the execution time will be much faster—azd deployment is incremental and only does "what changed."

The Final Result

The solution is now fully deployed:

  • The database is there with the data
  • The API works as expected
  • You can use your application!

2D6 Dungeon App deployed


Bonus: Deploying with CI/CD

Want to deploy with CI/CD? First, generate the GitHub Action (or Azure DevOps) workflow with:

azd pipeline config

Then, add a step to activate the alpha feature before the provisioning step in the azure-dev.yml file generated by the previous command.

- name: Extends provisioning providers with azd operations
  run: azd config set alpha.azd.operations on     

With these changes, and assuming the infra files are included in the repo, the deployment will work on the first try.

Conclusion

It's exciting to see how tools like azd are shaping the future of development and deployment. Not only do they make the developer's life easier today by automating complex tasks, but they also ensure you're ready for production with all the necessary Infrastructure as Code (IaC) files in place. The journey from code to cloud has never been smoother!

If you have any questions or feedback, I'm always happy to help—just reach out on your favorite social media platform.

In Video

Here is the video version of this blog post.


References