It set me back now that I'm at the end of it, but in the middle of the course it wasn't reflecting my progress at all. A much-needed improvement!
Update on September 19th
Further... away! I've submitted my course assessment already, but that bar won't go green. Oh, well.
Last update on September 23rd
By now I have received my certificate, but the progress bar was still far from filling all the way up. What I expected to be a bug ended up being a series of progress bar iterations I managed to capture.
I grabbed the Pipeline API and implemented a denormalizing pipeline. It would receive data from each table of a relational database and denormalize it into App Engine's Datastore (non-relational). What the pipeline helped with was waiting for the missing table so it could do the joins and complete the denormalization. It worked, but Datastore writes were soaring, quickly pushing my app past its daily budget.
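The "wait for the missing table" behavior can be sketched in plain Python. This is not the actual Pipeline API code, and the table names and row shapes are made up; it just shows the idea of buffering partial rows per key until every source table has arrived, then joining them into one record:

```python
# Hypothetical sketch: buffer rows per join key until all source tables
# have arrived, then flatten them into a single denormalized record.
# Table names and field names are invented for illustration.

EXPECTED_TABLES = {"orders", "customers"}  # hypothetical source tables

class DenormalizeBuffer:
    def __init__(self):
        self.pending = {}  # join key -> {table name: row}

    def receive(self, key, table, row):
        """Buffer a row; return the joined record once every table arrived."""
        rows = self.pending.setdefault(key, {})
        rows[table] = row
        if set(rows) == EXPECTED_TABLES:
            merged = {}
            for partial in rows.values():
                merged.update(partial)  # the "join": flatten into one record
            del self.pending[key]
            return merged  # ready for a single Datastore put
        return None  # still waiting on a missing table

buf = DenormalizeBuffer()
print(buf.receive(1, "orders", {"order_id": 1, "total": 30}))  # None
print(buf.receive(1, "customers", {"customer": "Ana"}))
```

The real pipeline does the same waiting via its barrier records, which is exactly where the extra writes come from, as the numbers below show.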
I decided to run some tests. Yes, I should've run them before coding the whole thing.
The first call to the run method writes 32 times to the Datastore. Summing up all the writes gives a total of 104. Each call to run that has a child writes around 30 times, and the last one, without a child, writes just 8 times.
Now the writes get serious: 108 writes on the generator pipeline, 8 on each of the other calls to run without a child, 162 total on the rest, adding it all up to 270. Ouch.
Checking the RPCs on that pricey call, here's what I see:
98 writes on one put. So what's in there? I clicked on evaluate and found a dp.put(entities_to_put), and in entities_to_put there are _SlotRecord entities, _PipelineRecord entities and _BarrierRecord entities: the whole pipeline pack.
This is what was unleashing the writing frenzy. I had set up a generator that would start other generators, and they'd all feed data back up the pipeline.
Well. The Pipeline API does an amazing job for MapReduce, but it's clearly not useful at all for what I had in mind here. My plan ended up being overkill. I was a little misled by the really simple samples in the getting-started guide. They just show how simple it is to set one up, and as you can see in the gists above, they truly are simple. Denormalizing at the source is the way to go in this case: data comes in already denormalized and the Datastore saves it, done.
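Denormalizing at the source can be sketched like this (field names are hypothetical): the joined fields get embedded before the write, so each record costs one put instead of fanning out through pipeline machinery:

```python
# Hypothetical sketch of denormalizing at the source: embed the related
# fields before saving, so each record is one self-contained entity and
# costs a single write. Field names are invented for illustration.

def denormalize_at_source(order, customer):
    """Embed the customer fields directly into the order record."""
    entity = dict(order)
    entity["customer_name"] = customer["name"]
    entity["customer_city"] = customer["city"]
    return entity  # one self-contained entity, one Datastore write

order = {"order_id": 7, "total": 30}
customer = {"name": "Ana", "city": "Recife"}
print(denormalize_at_source(order, customer))
```

The trade-off is the usual one: duplicated data and bigger entities in exchange for cheap, predictable writes and no join step at read time.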
Android 2.3.4 on modem version I9100UHKG4 sucked badly on my SGS2. My data connection would get stuck uploading without ever downloading anything back; the upload arrow was the only one lighting up. I had to turn the data connection off and back on to get internet working again.
I upgraded Android to 2.3.6, which came with modem version UHKE2. It worked like a charm: not a single time did I have the problem I was constantly hitting before.
Two days ago I upgraded the OS to the Polish build of 4.0.3 (ICS) and now I'm having problems again. I'm getting "no service" after a whole day, so I'll downgrade just the modem to the previous version. The current one is XXLPQ.
My Time Machine 2 has two HDMI ports on the back and one on the side. First, the HDMI port the NET decoder was plugged into stopped working. I switched it to the second rear port, where the PS3 had been connected. That one died too. I bought a new HDMI cable and connected it to the last port I had left: the side one. That one is gone as well.
The NET decoder destroyed all the ports, roughly one month apart each. The technician came to my house and said the problem could only be in the electrical wiring, because the decoder has protection against that. He left another unit, and now I only have component ports available.
I couldn't find similar reports online, so I'm leaving mine here.