Executed 20000 steps
Weights: [1.71574017682, 13.254363518, -0.251725402944, 0.748483584198, 4.88919616804, 2.53787967704]
Validating training
Input: [860, 0.69, 7.8, 3, 0] actual: 1, predict: 1
Input: [971, 0.73, 9.2, 2, 0] actual: 1, predict: 1
Input: [870, 0.74, 7.3, 1, 0] actual: 1, predict: 1
Input: [622, 0.82, 7.2, 1, 0] actual: 0, predict: 0
Input: [561, 0.65, 9.5, 1, 0] actual: 0, predict: 0
Input: [481, 0.73, 6.5, 1, 1] actual: 0, predict: 0
Input: [862, 0.72, 9.7, 1, 2] actual: 1, predict: 1
Correctness = 100%

You’ll see from the output that the program rolled through the data 20,000 times looking for the best fit, which it found when it weighted the elements above with the respective values shown next to 'Weights'. It then validated the model it came up with: the last two columns show actual (the result fed to the program) and predicted (the result produced by the model it generated), where 0 means not qualify and 1 means qualify. We want these to line up. If you compare predicted vs. actual, you’ll see that when the program applied those weights to the formula and tried to predict the results from the training sets, it came up with 100% 'correctness'. Not bad!
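If you're curious what "applied those weights to the formula" might look like in code, here's a minimal Python sketch. It assumes a logistic (sigmoid) model with the first weight acting as a bias term; the actual formula, any input scaling, and even the implementation language aren't shown above, so treat `WEIGHTS`, `sigmoid`, and `predict` as illustrative, not the program's real code.

```python
import math

# Weights copied from the output above. Assumption: the first value
# is a bias term, so six weights pair with the five input features.
WEIGHTS = [1.71574017682, 13.254363518, -0.251725402944,
           0.748483584198, 4.88919616804, 2.53787967704]

def sigmoid(z):
    # Numerically safe logistic function: maps any real z into (0, 1).
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def predict(features, weights=WEIGHTS):
    # Weighted sum of the five inputs plus the bias, squashed by the
    # sigmoid; 0.5 is the usual cutoff between "not qualify" (0)
    # and "qualify" (1).
    z = weights[0] + sum(w * x for w, x in zip(weights[1:], features))
    return 1 if sigmoid(z) >= 0.5 else 0
```

Note that the training program presumably scales its inputs before fitting, so feeding it raw rows like `[860, 0.69, 7.8, 3, 0]` through this sketch won't necessarily reproduce the output above; the point is the shape of the computation, not the exact numbers.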

But the real value of this modelling process lies in predicting future results! Below, you can plug in your own variables, and the program will apply what it’s ‘learned’ from the training data to assess your likelihood of qualifying (with a tip of the cap to the Magic 8 Ball in the process :-)
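For instance, reusing the `predict` sketch from above (the sample values here and the Magic 8 Ball phrasing for a "yes" are my assumptions; only "Outlook not so good." appears in the widget's actual output):

```python
# Hypothetical athlete: annual hrs, intensity factor, hours of sleep,
# number of races, location -- same feature order as the training rows.
me = [750, 0.70, 8.0, 2, 0]

# Map the 0/1 prediction onto Magic 8 Ball-style answers.
ANSWERS = {1: "Signs point to yes.", 0: "Outlook not so good."}
print(ANSWERS[predict(me)])
```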

Testing the model
Will I qualify?
[Interactive form: enter your Annual Hrs, Intensity Factor, Hours of Sleep, Number of Races, and Location, and the program runs them through the model and delivers its prediction in Magic 8 Ball style, e.g. "Outlook not so good."]