Configuring serverless YAML for API Gateway, Lambda, SQS, SNS

The last decade has seen almost every organisation migrate or develop applications on cloud infrastructure. Application architectures for platform and hosting have evolved through hybrid, multi-cloud and serverless models. The main driver behind the adoption of serverless architecture is auto-scaling at optimised cost, with minimal maintenance and no charge when idle. The highlights are “build more and manage less” and multi-language support.

In this post and subsequent ones, I have attempted to document the issues I stumbled upon and the solutions I derived from different links on the web. For now I am focusing on NodeJS as the programming language. Serverless architecture here is realised by writing a serverless configuration file, serverless.yml, and installing the serverless node module. A little knowledge of deployment templates, like CloudFormation, helps when starting to use serverless; in fact, in-depth knowledge of CloudFormation will help when we need to attempt something not yet covered in the serverless documentation. At the time of writing, I found the documentation good, but it still leaves many use cases uncovered, so it becomes a matter of trial and error in trying different options and making them work. I have had the opportunity to try serverless only on AWS, so all the information here relates to that single environment. I am planning to break this up into sections as:

  • Setting up serverless in a NodeJS environment
  • The basic building blocks of the serverless configuration file
  • A serverless configuration for API Gateway, Lambda and SQS (the current post)
  • Providing observability and operational analytics using CloudWatch logs and alarms, and configuration of the same
  • Securing credentials using encoding and decoding of secrets
  • Driving continuous integration and deployment for serverless using GitHub Actions
  • Using serverless-offline for development in the local environment

Setting up serverless in a nodejs environment – This involves the following steps:

  • Install NodeJS, most likely a version beyond 6
  • Install the dependencies for serverless
  • Setup the serverless CLI

Setting up the serverless CLI involves installing serverless as a self-contained executable module locally on our development machine.

npm install -g serverless

The above command installs serverless globally. I normally resort to invoking the serverless executable found within the node_modules folder, which is available after an npm install is run; serverless.js is the same as the sls command that you will find on the web. Going this way allows me to maintain a local version of serverless for each project I am working on, since different projects run on different NodeJS and serverless versions.

$ ./node_modules/serverless/bin/serverless.js
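Here is a minimal sketch of how I wire this up, assuming serverless is added as a devDependency (the version range and the deploy script name are my own choices, not from any particular project):

{
  "devDependencies": {
    "serverless": "^3.0.0"
  },
  "scripts": {
    "deploy": "serverless deploy --stage dev"
  }
}

With this, npm run deploy resolves the serverless binary from the project's own node_modules/.bin, so each project keeps its pinned version.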

The serverless documentation also talks about connecting to a dashboard server, which is useful for managing multiple applications across a single AWS account, but I skipped the serverless dashboard SaaS application during my development. It is useful nevertheless. Running the command above invokes an interactive CLI session, asking us whether to use the dashboard, and it initialises a serverless project for us. (https://www.serverless.com/framework/docs/getting-started)
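As a preview of where we are headed, here is a hedged, minimal serverless.yml sketch that wires an API Gateway endpoint to a Lambda and declares an SQS queue through raw CloudFormation (the service, function, handler and queue names are placeholders of my own):

service: my-sample-service

provider:
  name: aws
  runtime: nodejs14.x
  region: ap-south-1            # assumption, pick your own region

functions:
  hello:
    handler: handler.hello      # handler.js exporting a `hello` function
    events:
      - http:                   # exposes the function through API Gateway
          path: hello
          method: get

resources:
  Resources:
    SampleQueue:
      Type: AWS::SQS::Queue     # plain CloudFormation embedded in serverless.yml
      Properties:
        QueueName: my-sample-queue

The resources block is plain CloudFormation, which is where deeper CloudFormation knowledge starts to pay off.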

We will look into the serverless configuration file in detail in the next blog.

Website setup using Cloudflare, Cloudfront and S3 – Ways to debug issues

When we create a website in the cloud, we look for infrastructure services: storage as our origin, a CDN to expose our content over the internet, a domain name setup and transport-level certificates for encrypted transport over the network. With Amazon services, these can be S3 for storage, CloudFront for the CDN, AWS Route 53 to register domains, and AWS Certificate Manager to issue certificates for SSL transport. Sometimes we end up choosing an external domain registration service, e.g. Cloudflare, and then direct content from there to CloudFront. This choice could be made due to cost or because a legacy setup is already available. Here is the link from Cloudflare on how to do this: (https://developers.cloudflare.com/support/third-party-software/others/configuring-an-amazon-web-services-static-site-to-use-cloudflare/). In this blog, we will focus on certain key configurations of Cloudflare for our domain name resolution, CloudFront and the content present in S3.

Designing such a system depends on the nature of our website: how dynamic the content is, the traffic, its geolocation and security. Beyond our three components, S3, CloudFront and Cloudflare, we will require a build system (Jenkins or GitHub Actions) to generate automated builds, triggered by code check-ins, and deploy them. The sequence of events can be listed as:

  • Setting up an S3 bucket and copying our content into S3
  • Setting up CloudFront, pointing its origin to our newly created S3 bucket, and setting up certificates
  • Setting up Cloudflare for our new domain name and pointing it to the CloudFront-generated URL

Setting up an S3 bucket will require options to be configured such as archiving, versioning, encryption and hosting. In this blog we will consider only the hosting option: we enable static website hosting and choose the hosting type as a static website. Make sure to name the index page and error page in the appropriate sections.
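If you prefer the CLI, the same hosting setup can be applied with the aws s3 website command (the bucket and page names here are placeholders):

aws s3 website s3://our-s3-bucket --index-document index.html --error-document error.html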

The next important section is configuring the permissions on S3. We normally want only requests coming from CloudFront to have read access to this S3 bucket; we do not want any public access. While debugging we can temporarily allow public access, so that the URLs can be tested for different flows and responses. If our functionality works here, we can say our origin is fine. Beyond that, a production-ready system can be configured with permissions as below:

{
  "Version": "2008-10-17",
  "Id": "PolicyForOurCloudFrontOnlyAccess",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity CLOUDFRONTENTITY"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::our-s3-bucket/*"
    }
  ]
}

“PolicyForOurCloudFrontOnlyAccess” is what we have named our policy; this can be anything of your choice. The principal is the identity of the CloudFront origin access identity, which we get once we create the CloudFront setup. We can keep this policy section empty and configure it once the identity is created in CloudFront. “arn:aws:s3:::our-s3-bucket” is the ARN of the bucket we newly created.

The next step is to configure CloudFront by creating a new distribution. Most of the parameters are straightforward, like the protocol (http, https), geo-restrictions etc. We have to configure the origin to point at the new S3 bucket we created. We can leave alternate domain names and certificates empty; the distribution will get a generated domain name of its own, and we will use that to test our setup. We also have to set up the origin-related parameters: we create a new origin access identity and use it to fill in the S3 bucket access policy we had left empty earlier. With this done, our CloudFront distribution will be the only identity able to access the S3 bucket. We also configure the behaviour, like the nature of caching etc. Now we have our distribution ready, and we can bring up the URL in a browser and test it. This is the second phase of debugging: if the origin was fine with our earlier tests, any remaining problem has to be looked for in the CloudFront component.

We normally have a specific domain name to be associated with the site, e.g. learn.mylearningdomain.com. In that case we will have to configure alternate domain names in CloudFront, e.g. *.mylearningdomain.com or learn.mylearningdomain.com. For this site to be accessed using https, we will need to provide certificates for transport. Before loading the certificates or creating new ones, let us see how the certificates have to be configured.

We have to secure our transport route completely. This can be done by buying certificates from registered certificate authorities, or by using self-signed certificates from any of these providers. Self-signed certificates work only if both ends are under our control; e.g. if we use a self-signed certificate for the transport between Cloudflare and CloudFront, it will not work. In our case we generate the TLS certificates for CloudFront from AWS Certificate Manager, making sure the certificates support our alternate domain names when we request them, and we register the domain with certificates in Cloudflare for the transport between the user and the Cloudflare DNS.

We have to map our domain name in Cloudflare as an APEX record and a CNAME; I will explain these in another blog. Finally we map the CNAMEs to the CloudFront distribution endpoint that we generated. We also need to load our certificates into Cloudflare. Now we are mostly done, and we should be able to view the site in our browser using https://learn.mylearningdomain.com.

Finally, I would like to mention a point where deep links were not working even though the entire flow within the site was working: all URLs worked without any issue when accessed via internal redirects, but if we opened a URL directly, e.g. https://learn.mylearningdomain.com/aboutus, we used to get an access denied exception. Our application is a single-page web app, and there is a known issue when we integrate Cloudflare and CloudFront for a single-page website with deep links. We have to configure error pages, in the error pages tab of CloudFront, to redirect to index.html with a 200 response code. All links then worked fine. Here is the link to the answer.
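For reference, here is a hedged CloudFormation-style sketch of that error-page configuration (a fragment only; required properties such as Origins and DefaultCacheBehavior are omitted, and the same fields appear in the console's Error pages tab):

Resources:
  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        CustomErrorResponses:
          # S3 returns 403/404 for paths that are not real objects; serve the SPA shell instead
          - ErrorCode: 403
            ResponseCode: 200
            ResponsePagePath: /index.html
          - ErrorCode: 404
            ResponseCode: 200
            ResponsePagePath: /index.html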

Android Studio Setup – Build Variants Not Showing up

I have always faced some issue or other when setting up a project in Android Studio, be it Kotlin or react-native. I have already mentioned some of the issues and how I solved them for react-native applications. In this post I will mention the issues one can expect while configuring Android Studio for Kotlin projects. The information here holds good for both Windows and macOS platforms; I have tested these solutions on Windows 10, Windows 11 and macOS 13 Ventura.

Whether we pick up a project from git sources or open a folder with source files, we need the project setup to display the different build configurations. While this happens, we see a lot of initialization activity going on in Android Studio, namely setting up the JDK, setting up Gradle, project configuration etc. The final outcome is the Android Studio IDE window where we can see our build configuration selected in the top build section. On the left side we have our build variants filled in, based on whatever modules and flavors the project was built with.

This is an area where errors occur for developers new to the platform. It is important to understand the dependencies, the build outputs, the caches and the sequence of operations. The best way to understand this is by listing the dependencies required to build an Android app:

  • JVM-JDK – This is the starting point, and most of our desktops have Java versions sprawling everywhere. In many situations Android Studio will pick up a version based on what the system provides by default.
  • Maven, Gradle – These build tools have to be configured based on the JDK version we have; improper configuration will lead to strange build errors. The Gradle tool can be configured in two ways: we can have our own local Gradle installation, or we can configure it per project using the Gradle wrapper, located in the gradle folder at the top level of the project. It has a gradle-wrapper.jar and a gradle-wrapper.properties. Please note the distributionUrl: this is where the Gradle distribution is downloaded from, and it has to be in sync with the Java runtime we have (https://docs.gradle.org/current/userguide/gradle_wrapper.html). In Android Studio there is a reference to the Gradle user home; it is important to ensure all the common Gradle files used across projects are stored there and there is no conflict. (See the sketch after this list for regenerating the wrapper.)
distributionBase=GRADLE_USER_HOME
distributionUrl=https\://services.gradle.org/distributions/gradle-7.2-bin.zip
distributionPath=wrapper/dists
zipStorePath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
  • Android SDK – The Android SDK has all the libraries responsible for processing the API calls made to the Android platform. It also has the platform tools and emulator binaries. In short, the emulator, platform-tools, tools and tools/bin folders have to be on the system path for the platform commands to run (the sketch after this list shows a typical setup). In the project folder, the file local.properties has the link to the SDK location (the property sdk.dir maps to the SDK location).
  • Android Studio IDE structure, caches – The Gradle installation sets up a Gradle home folder where it downloads the plugin jar files required by the different projects that use Gradle. Whenever calls to such a library happen, the classes are first loaded from this cache. To invalidate the cache, we can either delete the contents of this folder or use the “Invalidate Caches” option in Android Studio. Whenever we invalidate caches, all the projects loaded by Android Studio will re-download their Gradle distributions.
  • kotlin plugins – Kotlin plugin versions are defined in the build.gradle file mentioned at project level. id 'org.jetbrains.kotlin.android' version '1.6.21' apply false.
  • Compose and other Android frameworks – These are part of the project-specific Gradle dependencies and will not cause any issue with the IDE bringing up the different flavours.
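As referenced in the Gradle and Android SDK items above, here is a hedged shell sketch of how the wrapper can be regenerated and how the SDK paths are typically wired (paths are illustrative for macOS; adjust for your machine):

# regenerate the wrapper for a specific Gradle version, run from the project root
./gradlew wrapper --gradle-version 7.2

# typical SDK path setup in ~/.zshrc (default macOS SDK location assumed)
export ANDROID_HOME="$HOME/Library/Android/sdk"
export PATH="$ANDROID_HOME/emulator:$ANDROID_HOME/platform-tools:$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$PATH"

# local.properties in the project root points the IDE/build to the same SDK
# sdk.dir=/Users/<you>/Library/Android/sdk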

Different Possibilities

Sometimes our Gradle plugin files might require Java 11 whereas we might reference Java 1.8 in our project (I mean different versions). These have to be in sync. We need to check which Gradle JDK is referenced in Settings/Build environment/Gradle as well as what the application references in build.gradle. It is always better to delete the .idea and .gradle folders created by Studio and do a fresh project sync with Gradle. Sometimes it is worthwhile to look at the Gradle settings provided in the Build output window by clicking the “Gradle Settings” link; you will get this by clicking the error message on the left side of the split window pane.

An exception occurred applying plugin request [id: 'com.android.application']
> Failed to apply plugin 'com.android.internal.application'.
   > Android Gradle plugin requires Java 11 to run. You are currently using Java 1.8.
     You can try some of the following options:
       - changing the IDE settings.
       - changing the JAVA_HOME environment variable.
       - changing `org.gradle.java.home` in `gradle.properties`

--------------- My build.gradle file----------
compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = '1.8'
    }
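One hedged way to resolve this mismatch, assuming a JDK 11 is installed at the path shown (adjust to your machine), is to point Gradle at it via gradle.properties, as the error message itself suggests:

# gradle.properties (project root) – the path below is an assumption, use your own JDK 11 location
org.gradle.java.home=/Library/Java/JavaVirtualMachines/openjdk-11.jdk/Contents/Home

Alternatively, change the Gradle JDK in the IDE settings mentioned above.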

“Some Kotlin libraries attached to this project were compiled with a newer Kotlin compiler and can’t be read. please update Kotlin plugin”

Ensure that the compose version is in sync with Kotlin plugin version. Note the org.jetbrains.kotlin.android version.

// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    ext {
        compose_version = '1.0.1'
    }
}
plugins {
    id 'com.android.application' version '7.1.1' apply false
    id 'com.android.library' version '7.1.1' apply false
    id 'org.jetbrains.kotlin.android' version '1.5.21' apply false
}

task clean(type: Delete) {
    delete rootProject.buildDir
}

Here I have tried to cover as many cases as I can that result in product flavors not appearing in the Android Studio IDE.

Gradle sync failed: This version of the Android Support plugin for IntelliJ IDEA (or Android Studio) cannot open this project

Try changing the distributionUrl in gradle-wrapper.properties to a lower version. If you still get an error about Gradle versions not being in sync, downgrade the plugin versions in build.gradle, e.g.

plugins {
    id 'com.android.application' version '7.1.1' apply false
    id 'com.android.library' version '7.1.1' apply false
    id 'org.jetbrains.kotlin.android' version '1.6.21' apply false
}

I will keep updating this as I go ahead and meet new scenarios. In case you find something not mentioned here, please let me know and I will add it here too.

Install nvm npm node on wsl2

A prerequisite to installing any package is updating the existing package manager and its repository or package lists. It is not uncommon to get errors like the one below, which warns that there is no Release file for the ‘focal’ release of the PulseAudio PPA. If we do not require this package, it is best to remove this entry from the list of sources to be updated. Most such issues are around PPAs (personal package archives); as the name suggests, these allow users to provide their own source packages that can be installed alongside other libraries. This is done using Launchpad, which is the hosting platform for these free services.

sudo apt update && sudo apt upgrade
Reading package lists... Done
E: The repository 'http://ppa.launchpad.net/therealkenc/wsl-pulseaudio/ubuntu focal Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

In my case I wanted to stream media over WSL as well, so I required PulseAudio, but this driver was not updated, so for the present I just removed it using the commands below. Whenever you hit a roadblock with the update not working, check whether the download site is reachable and we are able to get the details. In this case I had to remove a PPA, which can be done as described here (https://askubuntu.com/questions/307/how-can-ppas-be-removed). As mentioned in the link, the source lists are files with a .list extension in the directory /etc/apt/sources.list.d. The idea of managing these lists is also covered here: https://itsfoss.com/repository-does-not-have-release-file-error-ubuntu/

sudo add-apt-repository --remove ppa:whatever/ppa

The next step is to install nvm using the install.sh script, available for download from GitHub as a raw file.

sudo add-apt-repository ppa:therealkenc/wsl-pulseaudio
sudo add-apt-repository --remove ppa:therealkenc/wsl-pulseaudio
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (35) error:140943E8:SSL routines:ssl3_read_bytes:reason(1000)

The nvm install was failing with the above error. My SSL was set up properly, so I am not sure why this download errored. The workaround was to download the file directly, or copy-paste the contents of install.sh from https://github.com/nvm-sh/nvm/blob/master/install.sh into a local file, and run it; nvm was then installed. Once the nvm install is done, installing node is straightforward: we can use an existing version or install a new one.
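Spelled out, the workaround looks roughly like this, assuming the script contents have been saved locally as install_nvm.sh (downloaded through a browser or pasted from the GitHub page):

bash install_nvm.sh          # runs the installer, which appends the nvm loader to ~/.bashrc
source ~/.bashrc             # reload the shell configuration so the nvm function is available
nvm --version                # confirm the install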

Here is a good explanation from the nvm team on how to install and manage node with nvm. npm is bundled with node and has the corresponding version.

$ nvm use 16
Now using node v16.9.1 (npm v7.21.1)
$ node -v
v16.9.1
$ nvm use 14
Now using node v14.18.0 (npm v6.14.15)
$ node -v
v14.18.0
$ nvm install 12
Now using node v12.22.6 (npm v6.14.5)
$ node -v
v12.22.6

Upgrade to api 31 for react native apps in Android

Android push to playstore targetsdk upgrade

Google has mandated targetSdk 31, and any app targeting an API level below 31 cannot be deployed to the Play Store. In this blog I describe how I set up my development environment on macOS for a quick upgrade, in about 30 minutes. I developed an application using react-native; initially it was set up with targetSdk 30. When I submitted this to the Play Store, here is the error message I received:

“Your app currently targets API level 30 and must target at least API level 31 to ensure it is built on the latest APIs optimized for security and performance. Change your app’s target API level to at least 31.”

Here’s the guide from google: https://developer.android.com/google/play/requirements/target-sdk#pre12

A typical react-native application has build scripts using Gradle as the build tool. Android Studio is the best way to set up the environment, though the command line tools along with sdkmanager can also be used. There are changes to be made to our development environment and to our build files.

Here is the modification to the Gradle files. The build tools version and NDK have been chosen as the latest that work with this combination. The minSdk version can be changed to your requirement, and the compileSdk version can be set to a higher target as a matter of good practice. (build.gradle)

buildscript {
    ext {
        buildToolsVersion = "30.0.3"
        minSdkVersion = 21
        compileSdkVersion = 31
        targetSdkVersion = 31
        ndkVersion = "25.1.8937393"
    }

There are certain changes required to the Android manifest as well, since we have to specify the intent behaviour (android:exported) explicitly.

<activity
        android:name=".MainActivity"
        android:label="@string/app_name"
        android:exported="true"
        android:configChanges="keyboard|keyboardHidden|orientation|screenSize|uiMode"
        android:launchMode="singleTask"
        android:windowSoftInputMode="adjustResize">

The next set of changes puts our environment in place. This SDK requires a Java upgrade compared to earlier versions: previously we required Java 8, but here we require Java 11. So the following changes are required:

Point the JAVA_HOME environment variable to Java 11.

The system path should pick up the Java 11 SDK binaries. For macOS I used Homebrew to install openjdk@11.

Taken from the links – https://formulae.brew.sh/formula/openjdk@11 and
https://github.com/Homebrew/discussions/discussions/2405

For the system Java wrappers to find this JDK, symlink it with
  sudo ln -sfn /usr/local/opt/openjdk@11/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk-11.jdk

openjdk@11 is keg-only, which means it was not symlinked into /usr/local,
because this is an alternate version of another formula.

If you need to have openjdk@11 first in your PATH, run:
  echo 'export PATH="/usr/local/opt/openjdk@11/bin:$PATH"' >> ~/.zshrc

For compilers to find openjdk@11 you may need to set:
  export CPPFLAGS="-I/usr/local/opt/openjdk@11/include"
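Putting that together, a hedged way to make JDK 11 the active Java for the shell, once the symlink from the caveats above is in place (add to ~/.zshrc):

export JAVA_HOME="$(/usr/libexec/java_home -v 11)"
export PATH="$JAVA_HOME/bin:$PATH"
java -version   # should now report an 11.x runtime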

The error below was resolved once the Java version was changed:

> Task :react-native-device-info:compileDebugJavaWithJavac FAILED

143 actionable tasks: 20 executed, 123 up-to-date

An exception has occurred in the compiler (1.8.0_311). Please file a bug against the Java compiler via the Java bug reporting page (http://bugreport.java.com) after checking the Bug Database (http://bugs.java.com) for duplicates. Include your program and the following diagnostic in your report. Thank you.

java.lang.AssertionError: annotationType(): unrecognized Attribute name MODULE (class com.sun.tools.javac.util.UnsharedNameTable$NameImpl)

If the Android manifest does not declare android:exported for components with intent filters, the error below might be seen.

error Failed to install the app. Make sure you have the Android development environment set up: https://reactnative.dev/docs/environment-setup.

Error: Command failed: ./gradlew app:installDebug -PreactNativeDevServerPort=8081

/Users/../AndroidManifest.xml Error:

Apps targeting Android 12 and higher are required to specify an explicit value for `android:exported` when the corresponding component has an intent filter defined. See https://developer.android.com/guide/topics/manifest/activity-element#exported for details.

All of the above errors were seen when running:

npm start 
npm run android

One last time I got an out-of-memory error, but this was a one-off situation and resolved itself on the second run.

Execution failed for task ‘:app:packageDebug’.

> A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable

   > java.lang.OutOfMemoryError (no error message)

After this I was able to get my app to run. I hope this works for you too. Let me know if you have issues.

——————–

Links referred

https://stackoverflow.com/questions/68387270/android-studio-error-installed-build-tools-revision-31-0-0-is-corrupted

https://stackoverflow.com/questions/67412084/android-studio-error-manifest-merger-failed-apps-targeting-android-12

https://stackoverflow.com/questions/68344424/unrecognized-attribute-name-module-class-com-sun-tools-javac-util-sharednametab

https://developer.android.com/studio/command-line/sdkmanager

https://docs.oracle.com/en/java/javase/11/install/installation-jdk-macos.html#GUID-E8A251B6-D9A9-4276-ABC8-CC0DAD62EA33

https://formulae.brew.sh/formula/openjdk@11

Ruby Rails bundle update errors on MacOS

While setting up a Rails project we mostly use Homebrew to install some of our dependent components like Postgres or Redis. Sometimes a brew update or upgrade results in breaking some of our existing applications, services and components. Here I am listing a table of some of the errors with a timeline, which we can use to judge how relevant the problem or solution is to the one we have now.

Date: Nov 2021
Issue: postgres on macOS – in `initialize': connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory (PG::ConnectionBad)
Resolution: brew postgresql-upgrade-database

Date: Sep 2022
Issue: .somedir./.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/bootsnap-1.4.5/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require': dlopen(.somedir./.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/pg-1.2.2/lib/pg_ext.bundle, 0x0009): Library not loaded: '/usr/local/opt/postgresql/lib/libpq.5.dylib' (LoadError)
  Referenced from: '.somedir./.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/pg-1.2.2/lib/pg_ext.bundle'
  Reason: tried: '/usr/local/opt/postgresql/lib/libpq.5.dylib' (no such file), '/usr/local/lib/libpq.5.dylib' (no such file), '/usr/lib/libpq.5.dylib' (no such file) – .somedir./.rbenv/versions/2.7.2/lib/ruby/gems/2.7.0/gems/pg-1.2.2/lib/pg_ext.bundle
Resolution: If you have upgraded PostgreSQL with Homebrew (brew update && brew upgrade) or upgraded macOS (e.g. from Catalina to Big Sur), then simply uninstall the pg gem (gem uninstall pg) and run bundle install; the path will be corrected for you. No need to uninstall the whole PostgreSQL cluster.

Date: Sep 2022
Issue: MISCONF Redis is configured to save RDB snapshots, but it's currently unable to persist to disk.
Resolution: After a Homebrew upgrade, stop and start the services again. This sounds like common sense, but I missed it.
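For the restart cases above, the commands are simply the ones below (formula names assume a default Homebrew install; yours may be versioned, e.g. postgresql@14):

brew services list
brew services restart postgresql
brew services restart redis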

Setting up Opentelemetry collector, Jaeger in Kubernetes in a Docker Desktop environment

As part of setting up OpenTelemetry in my Rust application and testing it locally, I chose to use Docker Desktop, its Kubernetes environment and my Rust application, all running locally. Setting up my application to send traces directly to Jaeger, deployed as a docker container, was easy: it just required the default collector port, 4317, to be exposed to the application using docker commands. It becomes trickier when we have to set up a collector in between that can forward requests to Jaeger, and I needed all of them in a single DNS boundary. I chose Kubernetes over Docker Swarm, with no specific reason to start with. I then went looking for Jaeger configuration files for Kubernetes and an otel-collector configuration for Kubernetes, and there were difficulties in configuring and deploying both of them. Here are a few of the changes I made to make them work. Both Jaeger and the OpenTelemetry collector are deployed in the observability namespace. I used helm charts to deploy Jaeger, and kubectl to deploy the OpenTelemetry collector directly. Deploying Jaeger has two steps: deploy the Jaeger operator and then deploy Jaeger.

helm repo add jetstack https://charts.jetstack.io
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts

helm install \
  cert-manager jetstack/cert-manager \
  --namespace observability \
  --create-namespace \
  --version v1.8.0

helm install myrelease jaegertracing/jaeger-operator -n observability

helm install jaeger-all-in-one jaeger-all-in-one/jaeger-all-in-one -n observability

Jaeger also requires cert-manager to be installed, which is the third command above, and we will have to set up port forwarding before the Jaeger UI can be accessed. For additional details please refer to https://cert-manager.io/docs/installation/helm/#option-2-install-crds-as-part-of-the-helm-release

There were issues using the jaegertracing/jaeger chart, as it was giving errors like “…ensure CRDs are installed first…”, so I chose jaeger-all-in-one.

Next we need to install the opentelemetry-collector. I am deploying it in the same namespace, “observability”. We will need to modify the standard opentelemetry-collector yaml to point to the service name of jaeger-all-in-one, e.g.

Here is where I downloaded the OpenTelemetry collector yaml from – https://github.com/open-telemetry/opentelemetry-collector/blob/main/examples/k8s/otel-config.yaml. Port 14250 is used to send spans to Jaeger over gRPC.

Change 1

    exporters:
      jaeger:
        endpoint: "jaeger-all-in-one:14250" # Replace with a real endpoint.
        tls:
          insecure: true

Change 2

    service:
      extensions: [zpages, memory_ballast]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [jaeger]

Now this can be loaded using kubectl in the observability namespace.

kubectl apply -f Otel-collector.yaml -n observability

After this we need to set up a few port forwards: one for the Jaeger UI (16686) and one for the collector (4317). This can be done as follows; it is strictly for local development sandboxes and not a secure production implementation.

kubectl --namespace observability port-forward $POD_NAME 16686:16686
kubectl --namespace observability port-forward $POD_NAME 4317:4317
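$POD_NAME here stands for the Jaeger pod (for 16686) and the collector pod (for 4317); the actual names can be listed with:

kubectl get pods -n observability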

The two port-forwards can be spawned in two different terminals. After that we can load the UI in a browser at http://127.0.0.1:16686 and start our app sending traces to the OpenTelemetry collector. Though this Otel-collector.yaml deploys the otel-agent as a daemonset, I sent requests directly to the collector, which is deployed as a service on port 4317. The collector is internally configured to forward requests to the Jaeger collector on 14250 over gRPC.

Debugging Ruby’s Sidekiq scheduled jobs

This blog will be useful for developers new to the concept of scheduling with Sidekiq. The Sidekiq framework helps us run Ruby jobs in the background; more about this is available here. A typical Sidekiq job setup requires a job definition (greet_all.rb), which is called from an implementation of a Sidekiq worker (greet_all_worker.rb). This worker is scheduled in a file called schedule.yml, and the location of this file is loaded in a Sidekiq initializer (sidekiq.rb). Here is how some of these files look.

#apps/models/greet_all.rb
class GreetAll
   def self.greet
     print "Hello All"
   end
end

#apps/workers/greet_all_worker.rb
class GreetAllWorker
  include Sidekiq::Worker

  sidekiq_options retry: true

  def perform(*args)
    GreetAll.greet
  end
end

#create config/schedule.yml
process_greet_all_worker:
  cron: "0 0 23 * * * Asia/Kolkata" #if required to run at 11pm IST
  #cron: "*/2 * * * *" #if required to run every 2 mins
  class: "GreetAllWorker"
  queue: low

#Add to initializers/sidekiq.rb
schedule_file = 'config/schedule.yml'

if File.exist?(schedule_file) && Sidekiq.server?
  Sidekiq::Cron::Job.load_from_hash YAML.load_file(schedule_file)
end

Debugging this program can be done at 3 levels:

The first level is one where we directly check how our program works. This can be done using the rails console and calling the method, in our case GreetAll.greet.

Next we can try to see if our files and schedule are set up properly (i.e. that an entire call triggered by the system works fine). This can be done by scheduling a run at a defined interval; the commented cron line above allows the worker to be called every 2 minutes.

Once we are through with this, the final test is to verify that the cron scheduling is correct. This is also easy, but the only issue is that the documentation around the cron format is a little misleading. In most places I had seen the six-star form (* * * * * *) described as denoting minute, hour, day, week and so on, but that did not work for me: here it starts with seconds, then minutes, hour, day and so on.

The best way to understand this is to use the library that Ruby uses to parse the cron and date-time expressions. Fugit is the library used by Sidekiq-cron, and it provides an easy way to check what our configuration means. We can use the rails console to try it: feed our cron expression to the Fugit library as below. Once we verify this information, we are ready to ship our code.

>require 'fugit'
=> false
>c = Fugit::Cron.parse('0 29 11 * * * America/Los_Angeles')
=> #<Fugit::Cron:0x00007fd6fa335698 @original="0 29 11 * * * America/Los_Angeles", @cron_s=nil, @seconds=[0], @minutes=[29], @hours=[11], @monthdays=nil, @months=nil, @weekdays=nil, @zone="America/Los_Angeles",
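Fugit can also report when the job will fire next, which is a quick sanity check (the return value is an EtOrbi::EoTime object; shown here as a comment rather than a literal timestamp):

>c.next_time
#=> the next matching occurrence, as an EtOrbi::EoTime in America/Los_Angeles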

One point to note is that we need to clear our Redis cache whenever a change is made to the schedule.yml file; the cron jobs will then be reloaded into Redis. An easy way to do this is:

redis-cli flushall

Some useful links are

https://medium.com/serpapi/ruby-schedulers-whenever-vs-sidekiq-cron-vs-sidekiq-scheduler-b229d7ca5256

https://github.com/floraison/fugit

https://stackoverflow.com/questions/24886371/how-to-clear-all-the-jobs-from-sidekiq

mariadb – mysql server does not start on macOS – kill – No such process

Recently I had to install the MariaDB server on macOS. I followed the instructions from the site. Once installation was over I was able to connect using the mysql client on the command line. The next day I was unable to start my server; I tried the following options:

mysql.server start

brew services start mariadb

brew services restart mariadb

For brew services I got a stopped status when I listed the services, and for mysql.server start I got the following message:

/usr/local/bin/mysql.server: line 264: kill: (4542) – No such process

I tried the sudo option as well, but it did not help. The fix that worked is the one mentioned in the gist linked below: we need to delete the log files as follows.

  • Stop MySQL / MariaDB.
  • Go to /usr/local/var/mysql
  • Delete ib_logfile0 & ib_logfile1 files.
  • Try to start now; it should work
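Put together as commands (paths assume a default Homebrew install of MariaDB on Intel macOS; on Apple Silicon the data directory is usually under /opt/homebrew/var/mysql):

brew services stop mariadb
cd /usr/local/var/mysql
rm ib_logfile0 ib_logfile1
brew services start mariadb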

Thanks to the guys who put this together. https://gist.github.com/irazasyed/a74766108b4630fc5c7c822df23526e8

React Native, Firebase- Android Gradle Error -cannot find symbol return BuildConfig.DEBUG or No matching client found

Whenever we build a react-native application with the Firebase module and use messaging, we follow these steps:

  • Set up a package in the Android source code
  • Use the same package name to create an application in the Firebase console
  • Download google-services.json and place it in the android/app folder
  • Ensure that the applicationId in the build.gradle defaultConfig has the right package name (this is for Gradle plugin version 4.2.2); see the snippet below
  • The same package name has to be present in AndroidManifest.xml
  • The MainActivity.java and MainApplication.java package declarations have to be the same as the one mentioned in AndroidManifest.xml, build.gradle and google-services.json
  • The files MainActivity.java and MainApplication.java require the same folder structure as mentioned in the package
  • Ensure that react-native is started with the reset-cache option

npm start -- --reset-cache

We can try to remove node_modules and do an npm install again.

rm -rf node_modules
npm install

The main idea is to make the package name consistent across AndroidManifest.xml, build.gradle, google-services.json, the Java application references and the folder structure.
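For reference, here is a hedged sketch of the relevant section of the app module's build.gradle (the package name is a placeholder):

android {
    defaultConfig {
        // must match the package_name in google-services.json and the package in AndroidManifest.xml
        applicationId "com.example.myapp"
    }
}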

https://stackoverflow.com/questions/34990479/no-matching-client-found-for-package-name-google-analytics-multiple-productf

https://stackoverflow.com/questions/46878638/how-to-clear-react-native-cache

https://github.com/invertase/react-native-firebase/issues/3254

https://github.com/facebook/react-native/issues/11228