I have a library with typeclasses that I am migrating to Scala 3 using shapeless-3. One of my typeclasses is: trait Parser[T] { def parse(ctx: Context): (Opt
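Since the signature is cut off, here is a minimal Scala 3 sketch of what such a typeclass might look like; the Context definition and the (Option[T], Context) result shape are assumptions reconstructed from the truncated code, not the library's actual API.

```scala
// Minimal Scala 3 sketch; Context and the result shape are assumptions.
case class Context(input: List[String])

trait Parser[T]:
  def parse(ctx: Context): (Option[T], Context)

object Parser:
  // Example base instance: consume one token and try to read an Int.
  given Parser[Int] with
    def parse(ctx: Context): (Option[Int], Context) = ctx.input match
      case head :: tail => (head.toIntOption, Context(tail))
      case Nil          => (None, ctx)
```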
Regarding the "Gatling SBT execute a specific simulation" topic, is there any way to pass arguments to a simulation? I've been trying to pass a command from the CLI like
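One common approach is to pass JVM system properties through sbt and read them inside the simulation. A sketch (the simulation class, property names, and defaults are placeholders):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Run with, for example:
//   sbt -Dusers=10 -DbaseUrl=http://localhost:8080 "Gatling/testOnly example.ParamSimulation"
class ParamSimulation extends Simulation {
  private val users   = Integer.getInteger("users", 1).intValue
  private val baseUrl = System.getProperty("baseUrl", "http://localhost:8080")

  private val httpProtocol = http.baseUrl(baseUrl)
  private val scn = scenario("parameterized").exec(http("root").get("/"))

  setUp(scn.inject(atOnceUsers(users))).protocols(httpProtocol)
}
```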
Environment: Flink 1.14.4, standalone application mode in Kubernetes, following the official steps. Flink cluster: https://nightlies.apache.org/flink/flink-docs-rel
I was able to create a Docker-based Bitnami standalone Spark instance and run Spark jobs on it. However, I'm not able to write data to Snowflake from the
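For reference, a sketch of writing a DataFrame with the spark-snowflake connector; all connection values are placeholders, and the spark-snowflake and snowflake-jdbc jars must be on the classpath:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("sf-write").getOrCreate()
import spark.implicits._

// Placeholder connection options for the connector.
val sfOptions = Map(
  "sfURL"       -> "<account>.snowflakecomputing.com",
  "sfUser"      -> "<user>",
  "sfPassword"  -> "<password>",
  "sfDatabase"  -> "<database>",
  "sfSchema"    -> "<schema>",
  "sfWarehouse" -> "<warehouse>"
)

val df = Seq(("a", 1), ("b", 2)).toDF("name", "value")

df.write
  .format("net.snowflake.spark.snowflake") // the connector's source name
  .options(sfOptions)
  .option("dbtable", "TARGET_TABLE")
  .mode("append")
  .save()
```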
I have a DataFrame with a column of ArrayType, and the array may have a different length in each row of the data. I have provided some example code
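Since the example code is cut off, here is a small self-contained illustration of working with arrays of differing lengths; the data and column names are made up:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, explode, size}

val spark = SparkSession.builder().master("local[*]").appName("arrays").getOrCreate()
import spark.implicits._

// Rows whose arrays differ in length.
val df = Seq(
  (1, Seq("a", "b", "c")),
  (2, Seq("d"))
).toDF("id", "values")

df.select(col("id"), size(col("values")).as("len")).show()      // per-row lengths
df.select(col("id"), explode(col("values")).as("value")).show() // one row per element
```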
I have a Spark Scala DataFrame with two columns, text and subtext, where subtext is guaranteed to occur somewhere within text. How would I calculate the position of subtext within text?
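One way to do this is with the SQL locate function; since subtext is a column rather than a string literal, it has to go through expr() instead of the locate(String, Column) overload. A sketch with made-up data:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder().master("local[*]").appName("locate").getOrCreate()
import spark.implicits._

val df = Seq(("hello world", "world")).toDF("text", "subtext")

// locate() is 1-based and returns 0 when the substring is absent.
df.withColumn("pos", expr("locate(subtext, text)")).show()
```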
```scala
case class Student(id: String, name: String, teacher: String)

val myList = List(
  Student("1", "Ramesh", "Isabela"),
  Student("2", "Elena", "Mark"),
  Student(
```
Below is a sample code snippet used to fetch data from HBase. This worked fine with Spark 3.1.2; however, after upgrading to Spark 3.2.1, it is not working.
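Since the snippet itself is cut off, here is a plain TableInputFormat read as a baseline for what such a fetch typically looks like; the table name and configuration are placeholders:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("hbase-read").getOrCreate()

// Table name and any quorum settings are placeholders.
val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "my_table")

val rdd = spark.sparkContext.newAPIHadoopRDD(
  conf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result]
)
println(s"rows: ${rdd.count()}")
```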
In Scala, I am trying to count the files in an HDFS directory. I tried to get a list of the files with val files = fs.listFiles(path, false) and make a count
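The catch is that listFiles returns a Hadoop RemoteIterator, which is not a Scala collection and has no count method, so it has to be drained manually. A sketch (the directory path is a placeholder):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Drain the RemoteIterator[LocatedFileStatus] to count the entries.
def countFiles(fs: FileSystem, path: Path): Long = {
  val it = fs.listFiles(path, false)
  var n = 0L
  while (it.hasNext) { it.next(); n += 1 }
  n
}

val fs = FileSystem.get(new Configuration())
println(countFiles(fs, new Path("/some/dir")))
```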
Hi, I am trying to run this code; it works fine on one EC2 Azkaban instance but gives the error below on another instance. private val adminprop
Leveraging the best of SnakeYAML & Jackson in Scala, I am using the following method to parse YAML files. This method supports the use of anchors in YAML.
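Since the method itself is cut off, here is a sketch of the common pattern this likely refers to: SnakeYAML resolves anchors/aliases while loading, and Jackson then maps the resulting structure onto a target class. Names are illustrative:

```scala
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import org.yaml.snakeyaml.Yaml

// SnakeYAML handles the YAML syntax (including anchors); Jackson handles
// the binding to classes.
def parseYaml[T](yaml: String, target: Class[T]): T = {
  val tree   = new Yaml().load[Any](yaml) // anchors/aliases resolved here
  val mapper = new ObjectMapper().registerModule(DefaultScalaModule)
  mapper.convertValue(tree, target)
}
```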
I want to understand how to go about implementing the following use-case using typeclasses in Scala (or find out if it is even possible). Given a sealed trait a
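For orientation, the general shape such a design usually takes is a typeclass with one instance per case of the sealed trait. A minimal sketch with illustrative names (the actual trait and operations from the question are not shown):

```scala
sealed trait Shape
case class Circle(r: Double)     extends Shape
case class Square(side: Double)  extends Shape

trait Area[T <: Shape] {
  def area(t: T): Double
}

object Area {
  implicit val circleArea: Area[Circle] = (c: Circle) => math.Pi * c.r * c.r
  implicit val squareArea: Area[Square] = (s: Square) => s.side * s.side

  // Summoner for convenient lookup: Area[Circle].area(Circle(2.0))
  def apply[T <: Shape](implicit ev: Area[T]): Area[T] = ev
}
```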
I have a query that upserts the data to the database via Slick. I'd like to return the ids of the entities that were inserted. How can I do this using Slick in
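One approach worth knowing: Slick lets you combine `returning` with `insertOrUpdate`, which yields Some(id) when a row was inserted and None when an existing row was updated; support varies by database profile. A sketch with a hypothetical table:

```scala
import slick.jdbc.PostgresProfile.api._

case class User(id: Long, name: String)

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id   = column[Long]("id", O.PrimaryKey)
  def name = column[String]("name")
  def *    = (id, name) <> (User.tupled, User.unapply)
}
val users = TableQuery[Users]

// Some(id) on insert, None on update.
def upsert(u: User): DBIO[Option[Long]] =
  (users returning users.map(_.id)).insertOrUpdate(u)
```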
```scala
val spark = SparkSession.builder()
  .appName("Spark SQL basic example")
  .config("spark.master", "local")
  .getOrCreate()

import spark.implicits._

case class Someth
```
I have the following code, which is used to (SHA) hash columns in a Spark DataFrame:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions
```
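As the body of the code is cut off, here is a sketch of what such a helper commonly looks like, using sha2; the 256-bit length and the cast to string are assumptions:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, sha2}

// Hash each listed column in place; 256 selects SHA-256 (224/384/512 also valid).
def hashColumns(df: DataFrame, columns: Seq[String]): DataFrame =
  columns.foldLeft(df) { (acc, name) =>
    acc.withColumn(name, sha2(col(name).cast("string"), 256))
  }
```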
I am using a Jupyter notebook for running Spark. My problem arises when I try to register a UDF from my custom imported jar. This is how I create the UDF in
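For reference, the usual Scala route is spark.udf.register with a function from the jar, assuming the jar is on the driver/executor classpath (e.g. added via spark.jars). com.example.MyUdfs.normalize is a hypothetical function standing in for whatever the custom jar exposes:

```scala
// `spark` is the SparkSession available in the notebook.
// com.example.MyUdfs.normalize is hypothetical.
spark.udf.register("normalize", (s: String) => com.example.MyUdfs.normalize(s))

spark.sql("SELECT normalize('  MiXeD Case ')").show()
```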
In my project, I have the following workflow: Kafka message => Spark Streaming/processing => Insert/Update to HBase and/or Phoenix. Both the Insert and Update
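Since the code is not shown, here is a rough sketch of what the write stage of such a pipeline can look like, assuming Structured Streaming with foreachBatch and the phoenix-spark connector; the rate source stands in for the Kafka-derived stream, and table/zkUrl values are placeholders:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().master("local[*]").appName("phoenix-sink").getOrCreate()

// Stand-in for the stream produced by the Kafka/processing stages.
val events = spark.readStream.format("rate").option("rowsPerSecond", "1").load()

val query = events.writeStream
  .foreachBatch { (batch: DataFrame, _: Long) =>
    batch.write
      .format("org.apache.phoenix.spark")
      .option("table", "EVENTS")
      .option("zkUrl", "zk-host:2181")
      .mode("overwrite") // the connector maps Overwrite to a Phoenix UPSERT
      .save()
  }
  .start()
```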
I have a GenericRecord stream with the value deserialised using Avro; the schema has name and age fields. KafkaSource<GenericRecord> source = KafkaSource.<GenericRec
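As the builder chain is cut off, here is a Scala rendering of what such a source typically looks like with AvroDeserializationSchema.forGeneric; the broker, topic, and schema JSON are placeholders:

```scala
import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import org.apache.flink.connector.kafka.source.KafkaSource
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
import org.apache.flink.formats.avro.AvroDeserializationSchema

// Placeholder schema with the name and age fields mentioned above.
val schema: Schema = new Schema.Parser().parse(
  """{"type":"record","name":"Person","fields":[
    |  {"name":"name","type":"string"},
    |  {"name":"age","type":"int"}
    |]}""".stripMargin
)

val source: KafkaSource[GenericRecord] = KafkaSource.builder[GenericRecord]()
  .setBootstrapServers("localhost:9092")
  .setTopics("people")
  .setStartingOffsets(OffsetsInitializer.earliest())
  .setValueOnlyDeserializer(AvroDeserializationSchema.forGeneric(schema))
  .build()
```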
My project structure is:

logs
- data
  - pubs
    - invent.proto
  - common
    - num.proto

NOTE - The .proto files are not under src/main/protobuf
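Assuming sbt-protoc/ScalaPB is in play, a build.sbt sketch for pointing the plugin at a non-standard proto directory instead of the default src/main/protobuf (the path is taken from the structure above):

```scala
// build.sbt: add the custom directory to the proto source roots.
Compile / PB.protoSources := Seq(baseDirectory.value / "logs" / "data")
```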
I am evaluating different load testing tools. After trying JMeter and hitting two exceptions while running and viewing the test results, I would like to give Gatling a try.