This is the code from https://keras.io/examples/vision/image_classification_from_scratch/: import tensorflow as tf from tensorflow import keras from tensorflo
I am building a multi-class Vision Transformer Network. When passing my values through my loss function, it always returns zero. My output layer consists of 37
I am using easyocr methods to recognize the text on the license plate, but the results are not good. I have developed a deep learning model which detects license p
I'm trying to train a UNet, but for some reason I get the following error: Traceback (most recent call last): File "<ipython-input-54-b56497e81356>", l
How do I apply the initializer to the tf.Variable function? Am I on the right track? def initialize_parameters(): initializer
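Since the snippet is cut off after `initializer`, here is a minimal sketch of the usual pattern, assuming a Keras initializer object is created once and then called with the desired shape; the function name initialize_parameters comes from the question, while the shapes and seed are illustrative.

import tensorflow as tf

def initialize_parameters():
    # Create the initializer once, then call it with a shape and wrap the
    # resulting tensor in tf.Variable. Shapes and seed below are placeholders.
    initializer = tf.keras.initializers.GlorotNormal(seed=1)
    W1 = tf.Variable(initializer(shape=(25, 12288)), name="W1")
    b1 = tf.Variable(initializer(shape=(25, 1)), name="b1")
    return {"W1": W1, "b1": b1}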
I have a keras model with 5 outputs. My labels include 5 values to compare these to, but also 25 additional values representing a correlation matrix for the 5 v
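The label layout is cut off, but a common way to pass extra per-sample information like this is a custom loss that slices y_true. A hedged sketch, assuming a single (batch, 5) prediction tensor and that the 25 extra values are the flattened 5x5 matrix appended after the targets; the Mahalanobis-style weighting is only an illustration of how the matrix could enter the loss.

import tensorflow as tf

def loss_with_correlation(y_true, y_pred):
    # Assumption: y_true = [5 target values | 25 flattened correlation entries].
    targets = y_true[:, :5]
    corr = tf.reshape(y_true[:, 5:30], (-1, 5, 5))
    err = y_pred - targets
    # Illustrative use of the matrix: a Mahalanobis-style weighted squared error.
    weighted = tf.einsum("bi,bij,bj->b", err, corr, err)
    return tf.reduce_mean(weighted)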
I designed a CNN for multitask classification in Keras, where I have one input and two different sets of classes as output. I compiled the model in this way
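The compile call itself is truncated; for reference, a self-contained sketch of how a two-head Keras model is typically compiled with per-output losses keyed by layer name. The layer names, class counts, and input shape here are assumptions, not the asker's actual model.

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(16, 3, activation="relu")(inputs)
x = layers.GlobalAveragePooling2D()(x)
out_a = layers.Dense(10, activation="softmax", name="task_a")(x)
out_b = layers.Dense(4, activation="softmax", name="task_b")(x)
model = keras.Model(inputs, [out_a, out_b])

model.compile(
    optimizer="adam",
    loss={"task_a": "categorical_crossentropy",
          "task_b": "categorical_crossentropy"},
    loss_weights={"task_a": 1.0, "task_b": 1.0},
    metrics={"task_a": "accuracy", "task_b": "accuracy"},
)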
I have a task for my project paper and I do not understand how to train the model. This model is supposed to take an image and segment it into different classes. The h
I'm doing binary segmentation using UNET. My dataset is composed of images and masks. I divided the images and masks into different folders (train_images, trai
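The folder list is cut off; a minimal tf.data sketch for pairing images with masks, assuming matching filenames in train_images/ and train_masks/ (the folder names, image size, and PNG format are assumptions):

import tensorflow as tf

# Pairs images with masks by sorted filename order.
img_paths = sorted(tf.io.gfile.glob("train_images/*.png"))
mask_paths = sorted(tf.io.gfile.glob("train_masks/*.png"))

def load_pair(img_path, mask_path):
    img = tf.image.decode_png(tf.io.read_file(img_path), channels=3)
    img = tf.image.resize(img, (256, 256)) / 255.0
    mask = tf.image.decode_png(tf.io.read_file(mask_path), channels=1)
    mask = tf.image.resize(mask, (256, 256), method="nearest")
    mask = tf.cast(mask > 0, tf.float32)   # binary mask
    return img, mask

ds = (tf.data.Dataset.from_tensor_slices((img_paths, mask_paths))
      .map(load_pair, num_parallel_calls=tf.data.AUTOTUNE)
      .batch(16)
      .prefetch(tf.data.AUTOTUNE))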
NVIDIA GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilit
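A quick way to confirm the mismatch this error describes is to compare the GPU's compute capability with the architectures the installed wheel was built for; a small diagnostic sketch:

import torch

print(torch.__version__, torch.version.cuda)   # installed build and its CUDA version
print(torch.cuda.get_device_capability(0))     # (8, 6) for an RTX 3070
print(torch.cuda.get_arch_list())              # architectures the wheel was compiled for

If 'sm_86' is missing from the last list, the usual fix is installing a PyTorch build made for CUDA 11.1 or newer.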
I'm trying to use the cppflow library on a Windows 10 x64 machine in VS2019 C++. I want to run inference with my model on a batch of images (vector<cv::Mat>). I write a
I'm trying to reconstruct in Python the Gradient Transformation Network model in the paper titled: Single Image Super-Resolution Based on Deep Learning and Gra
I am working on a simple neural network in Keras with Tensorflow. There is a significant jump in loss value from the last mini-batch of epoch L-1 to the first m
I'm training a Conv-VAE for MRI brain images (2D slices). The output of the model is sigmoid, and the loss function is binary cross-entropy: x = input, x_hat = out
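The loss definition is truncated; for reference, the standard Conv-VAE objective with a sigmoid decoder sums per-pixel binary cross-entropy between x and x_hat and adds the KL term. A sketch, assuming the encoder outputs z_mean and z_log_var:

import tensorflow as tf

def vae_loss(x, x_hat, z_mean, z_log_var):
    # Reconstruction: per-pixel BCE (sigmoid output), summed over the image.
    bce = tf.keras.losses.binary_crossentropy(x, x_hat)
    recon = tf.reduce_sum(bce, axis=[1, 2])
    # KL divergence of q(z|x) = N(mu, sigma^2) from the standard normal prior.
    kl = -0.5 * tf.reduce_sum(
        1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)
    return tf.reduce_mean(recon + kl)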
I'm using Onnxruntime in NodeJS to run inference on ONNX-converted models with the CPU backend. According to the docs, the optional parameters are the followi
I tried to implement the simplest Deep Q Learning algorithm. I think I've implemented it right, and I know that Deep Q Learning struggles with divergence, but
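The implementation itself is cut off, so as a point of comparison, here is a hedged sketch of the core DQN target computation with a separate target network; model and target_model are assumed Keras models mapping states to per-action Q-values, and a frozen target network is the usual first remedy for divergence.

import numpy as np

def dqn_targets(model, target_model, states, actions, rewards, next_states, dones, gamma=0.99):
    # Bootstrapped targets r + gamma * max_a' Q_target(s', a'), zeroed on terminal states.
    q_next = target_model.predict(next_states, verbose=0)
    max_q_next = q_next.max(axis=1)
    targets = model.predict(states, verbose=0)   # start from the current predictions
    targets[np.arange(len(actions)), actions] = rewards + gamma * max_q_next * (1.0 - dones)
    return targets   # then fit with model.fit(states, targets)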
This is probably going to be a stupid question, but I am new to deep learning and TensorFlow. I have converted my deep learning model to TF-Lite; after that
I am training a convolutional neural network for binary time series classification. The training accuracy of the two models is very different. If on the first it g
model = Sequential() model.add(LSTM(units=32, return_sequences=True, input_shape=(training.shape[1],1))) model.add(Dropout(0.2)) mo
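The snippet is cut off after the first Dropout layer; a runnable sketch of the same stacked-LSTM pattern, where the placeholder data, the later layers, and the output size are assumptions rather than the asker's actual model:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

training = np.random.rand(500, 60, 1)   # placeholder data shaped (samples, timesteps, features)

model = Sequential()
model.add(LSTM(units=32, return_sequences=True, input_shape=(training.shape[1], 1)))
model.add(Dropout(0.2))
model.add(LSTM(units=32))    # assumed second LSTM; the original is cut off here
model.add(Dense(1))          # assumed single-value output
model.compile(optimizer="adam", loss="mse")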