Here is the situation: I am trying to figure out job retention for job placements in a welfare-to-work training program. In a nutshell, here is what is going on:
Each job (which requires a begin date) has a number of milestones (30 days, 90 days, 180 days, 1 year), and these are stored in the db as numerical values corresponding to “unknown”, “employed”, and “unemployed”. Each milestone is initialized as “unknown” (i.e., pending). As the employment milestones come due, the case manager clicks the appropriate radio button on a form. If the individual is no longer employed, an end date is entered for the job. There can also be “continuous” employment between jobs, if a second job starts within 10 days of the time the previous job ended.
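To make that continuity rule concrete, here is how I think of it (just a sketch in Python, not my actual code; the function name and the 10-day default are my own shorthand):

```python
from datetime import date

def is_continuous(prev_end, next_start, gap_days=10):
    """A second job continues the same employment spell if it starts
    within gap_days of the previous job's end date."""
    return (next_start - prev_end).days <= gap_days

# An 8-day gap chains the two jobs; a 15-day gap breaks the spell.
is_continuous(date(2024, 1, 10), date(2024, 1, 18))  # True
is_continuous(date(2024, 1, 10), date(2024, 1, 25))  # False
```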
I then need to do a calculation that divides the number of people who achieved a milestone by the number who possibly could have achieved it.
The achieved portion needs to account for any “employed” milestones (across jobs). If I have an ending date, I can assume that all milestones before that point are “employed”.
The possible portion needs to look at when the job began, and then see if enough days have elapsed that the milestone could possibly be achieved. If, for example, someone began a job 200 days ago but does not have an updated 180-day milestone, then the 30- and 90-day milestones are “possible” but the 180-day one isn’t.
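In case it helps, here is the per-person logic I am trying to express, sketched in Python rather than my actual VBA (field layout and function names are made up, and I've simplified by deriving “achieved” purely from dates, per my assumption above that milestones before an end date count as employed; an open job is treated as running through today):

```python
from datetime import date, timedelta

MILESTONES = (30, 90, 180, 365)

def merge_spells(jobs, today, gap_days=10):
    """jobs: list of (start, end) date pairs, end=None if still employed.
    Merge jobs into continuous-employment spells: a job starting within
    gap_days of the previous job's end extends the same spell."""
    spells = []
    for start, end in sorted(jobs, key=lambda j: j[0]):
        end = end or today
        if spells and (start - spells[-1][1]).days <= gap_days:
            spells[-1] = (spells[-1][0], max(spells[-1][1], end))
        else:
            spells.append((start, end))
    return spells

def person_flags(jobs, today):
    """For one person, return (achieved, possible) dicts keyed by milestone.
    Possible: enough days have elapsed since a spell's start.
    Achieved: the spell actually lasted at least that many days."""
    spells = merge_spells(jobs, today)
    achieved = {m: False for m in MILESTONES}
    possible = {m: False for m in MILESTONES}
    for start, end in spells:
        for m in MILESTONES:
            if (today - start).days >= m:
                possible[m] = True
            if (end - start).days >= m:
                achieved[m] = True
    return achieved, possible
```

The rate I want is then, per milestone, the count of people with `achieved` set divided by the count with `possible` set.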
The problem is that for each person I need to look at a set of jobs. Twice. My solution involves opening an ADO recordset for both the Achieved and Possible calculations, calculating the number of days achieved/possible, and then doing IIF statements in the query for 30, 90, 180, and 365 days. For each person. And, surprise, it is cripplingly slow. If the number of clients I am looking at becomes too large, I get an overflow error.
Can anyone help me reconceptualize what I’m doing? I’ve attached the functions I use for the achieved/possible calculations. I hope this is clear; I never know if I am explaining too little or too much.
best,
Karl