How to find the mean and the variance in assembly from unsigned integers as a floating-point number/decimal
Here is the code that I have below:
INCLUDE Irvine32.inc
.386
.model flat, stdcall
.stack 4096
ExitProcess proto, dwExitCode:dword
PWORD TYPEDEF PTR WORD
.data
RAW WORD 10, 12 , 8, 17, 9, 22, 18, 11, 23, 7, 30, 22, 19, 60, 71
ptr1 PWORD RAW
TEXT1 BYTE "Mean: ", 0
TEXT2 BYTE "Variance: ", 0
TOTAL WORD LENGTHOF RAW
SUM DWORD 0
MEAN DWORD 0
VARSUM DWORD 0
.code
main proc
mov eax, 0
mov ecx, LENGTHOF RAW
mov esi, OFFSET RAW
SumLoop:
movzx edx, WORD PTR [esi]   ; elements of RAW are WORDs, so zero-extend one at a time
add eax, edx
add esi, TYPE RAW           ; advance 2 bytes to the next WORD
loop SumLoop
; Taking the avg of the numbers
mov SUM, eax
xor edx, edx                ; clear EDX for unsigned DIV (CDQ sign-extends, which is for IDIV)
mov ebx, LENGTHOF RAW       ; 15 elements
div ebx                     ; EAX = integer mean, EDX = remainder
mov MEAN, eax
invoke ExitProcess, 0
main endp
end main
My question is: how do I get the mean and the variance of these unsigned integers as floating-point numbers/decimals (format: Mean: XX and Variance: YY)? I'm not sure how I'm supposed to do this.
Solution 1:[1]
Integer division will give you an integer quotient and integer remainder.
If you want a mean with decimal places, convert the sum and count to floating point before dividing — then use floating point division to get a floating point answer.
For example, in C:
float answer = (float) sum / (float) 15;
A compiler can show you the instructions to use. (You may have to pick a different compiler and/or compiler options for your environment.)
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow