Are you one of those who see this nice banner every time they use a Microsoft Office product on an iOS/iPadOS device?
The Lenovo ThinkBook 14 G3 ACL is a well-equipped device with a Ryzen 7 CPU and decent build quality. The keyboard has a backlight, but lacks the quality and key travel of a typical ThinkPad keyboard – and the typical ThinkPad TrackPoint is missing 🙁 Installing Linux (Kubuntu 22.04, kernel 5.15) would work well if it were not for the Realtek RTL8852BE WiFi card with integrated Bluetooth. Read more: Kubuntu 22.04 on Lenovo ThinkBook 14 G3 ACL
Some apartments and houses have terribly placed light switches, and drilling holes to run new cables is often not worth the effort. An alternative solution is to set up a smart home system to control smart lights. Read more: Smart Lighting
Thanks to the FOSS4G talk “FOSS4G – Cloud optimized formats for rasters and vectors explained,” I came across the vector format “FlatGeobuf” for the first time – and, surprise, QGIS supports it too 🙂 . That was an obvious starting point for testing it with some vector data over a poor network connection (~16 MBit/s).
Read more: FlatGeobuf – vector performance for the cloud (tested with QGIS 3.24)
Since the end of October 2021, the meteorological service of Austria (ZAMG) has provided free datasets on the “ZAMG Data Hub”.
In addition to typical meteorological data from weather stations, a “spatial data” category is also available – so let’s have a look at the data in QGIS, including the NetCDF datasets with timestamps.
Although the HP EliteBook 745 G2 (AMD hardware) has some age, it is a nice working tool with good build quality, and mine still works fine after six years of intense use. BUT: HP lists 1.48 as the most recent BIOS version, while the internal BIOS update tool, running on version 1.44, reports that no update is available.
Just for fun, I gave Fedora 34 with GNOME a try again 🙂 One of the first things to do as a geoscientist was to install QGIS… Because the repo versions (and COPR) are not always up to date, I selected the Flatpak version… but got 3.16 LTS although I expected 3.18.2 :-/ What I did not know: Flathub encapsulates two versions in one “repo”.
MapReduce represents a pattern that has had a huge impact on the data analysis and big data community. Apache Hadoop allows data processing to scale out with the number of nodes and cores.
One of the many cornerstones of this framework is that code is shipped and executed on-site, where the data resides. Only a pre-processed, transformed version of the data (the map output) is then shuffled and sorted over the network to the aggregators (reducers) on different executors.
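The flow described above – map locally, then shuffle and sort the intermediate pairs before handing them to the reducers – can be sketched in a few lines of plain Python. This is a toy, single-process illustration; `map_shuffle_reduce` and the word-count `mapper`/`reducer` below are illustrative names of my own, not Hadoop APIs:

```python
from collections import defaultdict

def map_shuffle_reduce(records, mapper, reducer):
    """Toy, single-process illustration of the MapReduce data flow."""
    # Map phase: call the mapper on every (key, value) input record
    intermediate = []
    for key, value in records:
        intermediate.extend(mapper(key, value))
    # Shuffle & sort phase: group all emitted values by their key
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # Reduce phase: aggregate the values of each key, in sorted key order
    results = []
    for key in sorted(groups):
        results.extend(reducer(key, groups[key]))
    return results

def mapper(key, value):
    # emit (word, 1) for every word in the line
    for word in value.split():
        yield word, 1

def reducer(key, values):
    # sum up all counts emitted for this word
    yield key, sum(values)

print(map_shuffle_reduce([(None, "to be or not to be")], mapper, reducer))
# → [('be', 2), ('not', 1), ('or', 1), ('to', 2)]
```

In a real cluster the three phases run on different machines, and only the intermediate `(key, value)` pairs travel over the network.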
MapReduce is hard to use on its own, so it is usually deployed with Apache Hadoop or Apache Spark. To play around with the pattern without either of those large frameworks, I created one in Python – MapReduceSlim. It emulates all core features of MapReduce, with one difference: it loads each line of the input files separately into the map function, whereas Apache Hadoop would do this block-wise. This makes it a nice way to understand the behavior of the MapReduce pattern and how to implement a mapper and a reducer.
Classic WordCount Example
# Hint: in MapReduce with Hadoop Streaming the
# input comes from standard input STDIN
def wc_mapper(key: str, values: str):
    # remove leading and trailing whitespaces
    line = values.strip()
    # split the line into words
    words = line.split()
    for word in words:
        # write the results to standard
        # output STDOUT
        yield word, 1
def wc_reducer(key: str, values: list):
    current_count = 0
    word = key
    for value in values:
        current_count += value
    yield word, current_count
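Outside of any framework, the two functions above can also be wired together with a hand-rolled shuffle/sort step – a minimal sketch of roughly what MapReduceSlim does internally (an assumption on my part; the mapper and reducer definitions are repeated here in compact form so the snippet is self-contained):

```python
from itertools import groupby
from operator import itemgetter

# Compact repeats of the mapper/reducer above, for a self-contained run
def wc_mapper(key: str, values: str):
    for word in values.strip().split():
        yield word, 1

def wc_reducer(key: str, values: list):
    yield key, sum(values)

# Hypothetical input lines (MapReduceSlim would read these from a file)
lines = ["the quick brown fox", "the lazy dog"]

# Map phase: one mapper call per input line, collecting all (word, 1) pairs
mapped = [pair for line in lines for pair in wc_mapper(None, line)]

# Shuffle & sort phase: sort by word so equal keys become adjacent
mapped.sort(key=itemgetter(0))

# Reduce phase: group the values of each word and sum them
counts = dict(
    result
    for word, group in groupby(mapped, key=itemgetter(0))
    for result in wc_reducer(word, [v for _, v in group])
)
print(counts["the"])  # → 2
```

The sort-then-`groupby` step plays the role of Hadoop's shuffle: it guarantees the reducer sees all values of one key together.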
Finally, call the functions with the MapReduceSlim framework:
# Import the slim framework
from map_reduce_slim import MapReduceSlim, wc_mapper, wc_reducer

### One input file version
# Read the content from one file and use the
# content as input for the run.
MapReduceSlim('davinci.txt', 'davinci_wc_result_one_file.txt',
              wc_mapper, wc_reducer)

### Directory input version
# Read all files in the given directory and
# use the content as input for the run.
MapReduceSlim('davinci_split', 'davinci_wc_result_multiple_file.txt',
              wc_mapper, wc_reducer)
Further information on GitHub: https://github.com/2er0/MapReduceSlim