Streamlit is a handy tool for turning data into viewable web apps rapidly. It executes a single Python file and automatically reloads and reruns that file whenever it changes.
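Conceptually, this rerun model can be approximated with a small watcher that re-executes the whole file whenever its modification time changes. The sketch below is only a toy illustration of the idea, not Streamlit's actual implementation; the function names `run_script` and `watch` are made up for this example.

```python
import os
import time

def run_script(script_path):
    # Execute the whole Python file top to bottom with fresh globals,
    # mimicking how Streamlit reruns the entire script on each change.
    with open(script_path) as f:
        source = f.read()
    exec(compile(source, script_path, "exec"), {"__name__": "__main__"})

def watch(script_path, poll_seconds=1.0):
    # Re-run the script whenever its modification time changes.
    last_mtime = None
    while True:
        mtime = os.path.getmtime(script_path)
        if mtime != last_mtime:
            last_mtime = mtime
            run_script(script_path)
        time.sleep(poll_seconds)
```

Streamlit adds much more on top (caching, widgets, a web frontend), but the "rerun the whole script" loop is the core of its programming model.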
MapReduce is a pattern that had a huge impact on the data analysis and big data community. Apache Hadoop scales data processing with the number of nodes and cores. One of the cornerstones of the full framework is that code is shipped to and executed on-site, where the data resides. Only a pre-processed, transformed version of the data (the map output) is then shuffled and sorted over the network to the aggregators on different executors.
MapReduce is hard to use on its own, so it is usually deployed with Apache Hadoop or Apache Spark. To play around with the pattern without either of those large frameworks, I created one in Python: MapReduceSlim. It emulates all core features of MapReduce, with one difference: it feeds each line of the input files separately into the map function, whereas Apache Hadoop processes input block-wise. This makes it a nice way to understand the behavior of the MapReduce pattern and how to implement a mapper and a reducer.
```python
# Hint: in MapReduce with Hadoop Streaming the
# input comes from standard input STDIN
def wc_mapper(key: str, values: str):
    # remove leading and trailing whitespaces
    line = values.strip()
    # split the line into words
    words = line.split()
    for word in words:
        # write the results to standard
        # output STDOUT
        yield word, 1
```
```python
def wc_reducer(key: str, values: list):
    current_count = 0
    word = key
    for value in values:
        current_count += value
    yield word, current_count
```
Finally, call the functions with the MapReduceSlim framework:
```python
# Import the slim framework
from map_reduce_slim import MapReduceSlim, wc_mapper, wc_reducer

### One input file version
# Read the content from one file and use the
# content as input for the run.
MapReduceSlim('davinci.txt', 'davinci_wc_result_one_file.txt',
              wc_mapper, wc_reducer)

### Directory input version
# Read all files in the given directory and
# use the content as input for the run.
MapReduceSlim('davinci_split', 'davinci_wc_result_multiple_file.txt',
              wc_mapper, wc_reducer)
```
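For intuition, the map/shuffle/reduce cycle described above can be sketched as a small in-memory driver. This is a simplified illustration, not the actual MapReduceSlim code; the mapper and reducer are repeated in compact form so the snippet is self-contained, and the driver function `map_reduce` is a name invented for this example.

```python
from collections import defaultdict

def map_reduce(lines, mapper, reducer):
    # Map phase: feed each line separately into the mapper,
    # as MapReduceSlim does.
    intermediate = defaultdict(list)
    for i, line in enumerate(lines):
        for key, value in mapper(str(i), line):
            intermediate[key].append(value)
    # Shuffle & sort phase: group the values by key and
    # process the keys in sorted order.
    results = []
    for key in sorted(intermediate):
        # Reduce phase: aggregate all values for one key.
        results.extend(reducer(key, intermediate[key]))
    return results

def wc_mapper(key, values):
    # Emit (word, 1) for every word in the line.
    for word in values.strip().split():
        yield word, 1

def wc_reducer(key, values):
    # Sum up all counts for one word.
    yield key, sum(values)

lines = ["to be or not to be", "be quick"]
print(map_reduce(lines, wc_mapper, wc_reducer))
# [('be', 3), ('not', 1), ('or', 1), ('quick', 1), ('to', 2)]
```

In a real Hadoop deployment the map, shuffle/sort, and reduce phases run distributed across machines; here they are just three steps inside one function.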
Further information on GitHub: https://github.com/2er0/MapReduceSlim
Data science with Jupyter notebooks can sometimes get exhausting. What about debugging, version control, code reviewing, and so on? Coming from a software engineering background, it feels like losing 50% of the tooling you were used to.
To mitigate those problems, I recently partially switched from Python to R, with many improvements. For local Python coding, JetBrains PyCharm is my tool of choice, with Jupyter notebooks for remote coding. With R, it is RStudio Desktop locally, and for remote work there is RStudio Server, which is almost the desktop version running in a browser. This lets you develop and analyze data from any device with a browser.
If you want to install Arch, everyone tells you to read the installation guide. The second thing you may hear is that you should read the installation guide again and follow it step by step. There is a short name for that: RTFM, Read The Fucking Manual, and stick to it. No joke.
Make backups before installing Arch Linux. 😉
For four years now, I have used Manjaro as my main GNU/Linux distribution for daily use. That includes developing in Java/C++/Python and data analysis with R/Python.
Now it was time for me to switch from Manjaro to another distribution. Side note: Manjaro uses Arch Linux as its base distribution but provides a considerable amount of additional services out of the box. Manjaro ran fine for four years with only one incident, involving the integrated WWAN modem.
Since I started using Manjaro, I have loved the "rolling release" model, with an up-to-date kernel and up-to-date packages. I decided that it was time for me to switch to plain Arch Linux.
Anyone wondering why so much proprietary software is in use in government agencies and public administration: since Monday (19.02.2018) there has been a very well-researched documentary by ARD and c't on the subject.
Beyond the arguments about vendor lock-in, security concerns (the use of US closed-source software in the military and police sector), and so on, one should not neglect the opportunity to build up a European software industry, and with it future-proof jobs. Instead of effectively wiring tax money across the Atlantic, invest it in building a European software industry, based on open software and open standards. Dear politicians and public administration: instead of falling for lobbying, better to see the opportunities for the future! 🙂
Documentary available until 19.05.2018
An entirely new Firefox: why reinvent the wheel again?
Why? Performance, a new clean codebase, better support for new technologies, a different philosophy, …
It has been a long time since I had time for some useful and useless stuff. So we (isticktoit) found some useless stuff on Heise Open: a Linux retro-gaming distribution, and thought about firing up some old hardware to 'waste' some hours.
In this case I tried the new release of the Lakka distribution, which is mostly for retro/emulator gaming. It ships a lot of emulators, from Atari up to PlayStation and Nintendo.