OSGeo Planet

CARTO Inside Blog: Visualizing the spread of current fires with CARTO VL

OSGeo Planet - Thu, 2018-08-16 10:00

Using synced data streams and CARTO VL, this map shows active fire perimeters and their growth since the beginning of July.

The initial goal of the map was to animate the growth of fires over time. Through the design and development process, other patterns began to emerge like the speed at which some fires are growing compared to others, and the spread pattern of fires over different geographic areas.

Read on to learn more about how we are leveraging synced data feeds, powerful visualization capabilities, and dynamic UI elements to power this near real-time fire map.

Link to live map


The fire polygons displayed on the map are derived from two datasets from GeoMAC Wildland Fire Support: active fire perimeters and all (active and inactive) perimeters for the 2018 fire year. In order to display only active fires, the two datasets are intersected: each perimeter tagged as “active” in the first dataset selects only the polygons within its boundary from the all-2018 perimeter dataset. Once a fire perimeter is declared inactive, it no longer displays on the map.

In the background, smoke patterns produced from the fires are displayed using smoke perimeter data from NOAA’s Fire Products.

Each dataset syncs to the source every hour.

This means any time the data are updated, so is the map.

Cool, right?

Keep reading for more… :)

Animation and Visualization

There are so many visualization capabilities inside of CARTO VL, and lately one of my favorites to experiment with is animation.

In the active fires map, there are two different polygon animations happening.

First is the visualization of how fires grew over time inside their current perimeter. The polygons draw in according to the date the perimeter was collected (datecrnt), and each is colored based on its reported acreage (gisacres). The animation begins on July 1, 2018 and plays through until it reaches the polygon with the most recent date in the table, cycling through all of the data in 30 seconds. The polygons have no fade-in effect, but fade out based on reported acres. This means that larger polygons stay on the map longer than smaller ones; given that most fires spread from a small area to a larger one, this provides a way to gradually fade out where a fire began and bring the larger, subsequent polygons to the foreground.

filter: animation(linear($datecrnt,time('2018-07-01T00:00:00Z'),globalMAX($datecrnt)),30,fade(0,$gisacres))

Each polygon is colored according to its reported acreage using one of our CARTOColor schemes (OrYel), where yellow is assigned to smaller areas through orange to red for the largest. The opacity of a polygon blends between 0.1 and 0.6 based on its area.

color: opacity(ramp(linear($gisacres,viewportMIN($gisacres),viewportMAX($gisacres)),oryel),blend(0.1,0.6,linear($gisacres)))
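To make the style expression above concrete, here is a rough Python sketch of what linear() and blend() compute for a single feature, assuming linear() clamps its output to [0, 1]; the acreage figures below are made up for illustration:

```python
def linear(value, lo, hi):
    """Sketch of CARTO VL's linear(): normalize value into [0, 1] over [lo, hi]."""
    if hi == lo:
        return 0.0
    t = (value - lo) / (hi - lo)
    return max(0.0, min(1.0, t))

def blend(a, b, t):
    """Sketch of CARTO VL's blend(): interpolate between a and b by t."""
    return a + (b - a) * t

# Hypothetical acreage range for the current viewport, and one fire's acreage:
viewport_min, viewport_max = 50.0, 460000.0
acres = 229000.0

# The opacity part of the color expression above: blend(0.1, 0.6, linear(...))
opacity = blend(0.1, 0.6, linear(acres, viewport_min, viewport_max))
```

Because the min and max come from the viewport, panning or zooming changes the normalization, which is exactly what makes the symbology re-rank the fires as you move around the map.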

Combining these animation and visualization techniques gives the visual impact of a growing and burning fire:


The second animation, of the smoke plumes, is more subtle, and so is its style. Unlike the fires, the smoke is not animated over time (since NOAA serves a new dataset each day). Given that smoke has a natural, flow-like movement, I wanted an effect where, on any given day, we could see which fires are generating the most smoke and get a sense of how the plumes are traveling. This is a fun technique to experiment with.

Smoke animation

Viewport-Based Styling

Another favorite CARTO VL feature of mine is the ability to do viewport-based styling versus globally over an entire dataset.

You might have noticed that in the color styling above, we are ramping linearly across the viewportMIN and the viewportMAX.

With viewport-based styling we get a better understanding of fires at both the national and more localized levels.

In the image below, when the map is zoomed out to the western United States, the fires that really pop out are the Mendocino Complex and Carr fires in Northern California. We can see there are many other fires burning in different states, but since those are two of the largest in the current view, and polygons are colored according to their size, the other, comparatively smaller fires are colored more along the yellow to orange range.

Viewport zoomed out

If we zoom into the series of fires in southern Oregon, removing both Mendocino and Carr from the view, the symbology dynamically updates to take into account only the fires in the current viewport. In this case we can see that the Klondike and Taylor Creek fires are the largest burning fires in this area.

Viewport zoomed in

Legend and Interactivity

Of course no map is complete without a legend and hover or pop-up components.

Our Head of Design Emilio created an informative yet unobtrusive legend that hierarchically presents the different levels of information. The dynamic time stamp that cycles through the animation and the color scale legend are key to understanding the patterns seen on the map.

And since we can’t fit all of the information in a legend or map symbol, Front-End Developer Jesus created hover-based interactivity that dynamically fetches the name and acreage of the current fire polygon.

Hover interactivity

Take a look!

You can find this map and associated code here!

We hope this map gets you even more excited about CARTO VL and we can’t wait to see the maps that you make!

Happy mapping!

Categories: OSGeo Planet

CARTO Inside Blog: Beyond the Shapefile with File Geodatabase and GeoPackage

OSGeo Planet - Thu, 2018-08-16 09:30

Anyone who has programmed geospatial software has eventually come to a conclusion about data formats: there is only one true de facto standard for geospatial data, the shape file, and the shape file sucks.

  • It requires at least three files to define one spatial layer (more if you want to specify coordinate reference system, or character encoding, or spatial indexing).
  • It only supports column names of 10 or fewer characters.
  • It lacks a time or timestamp data type.
  • It is limited to 2GB in file size.
  • It only supports homogeneous spatial types for each layer.
  • It only supports text fields of up to 255 characters.

Almost since inventing it, Esri has been trying to come up with replacements.

The “personal geodatabase” used the Microsoft Access MDB format as a storage layer, and stuffed geospatial information into that. Sadly, the format inherited all the limitations of MDB, which were substantial: file size, bloat, occasional corruption, and of course Windows-only platform dependencies.

File Geodatabase

The “file geodatabase” (FGDB) got around the MDB limitations by using a storage engine Esri wrote themselves. Unfortunately, the format was complex enough that they could never fully specify it and release a document on the format (and never seemed to want to). As a result, official support has only been through a proprietary binary API.

The FGDB format is very close to a shape file replacement.

  • It supports multiple layers in one directory.
  • It has no size limitations.
  • It has a rich set of data types.
  • It includes useful metadata about coordinate reference system and character encoding.

Since it shipped with ArcGIS 10 in 2010, the FGDB format has become popular in the Esri ecosystem, and it’s not uncommon to find FGDB files on government open data sites, or to receive them from GIS practitioners when requesting data.

CARTO has supported the shape file format since day 1, but we only recently added support for FGDB. We were able to support FGDB because the GDAL library we use in our import process has an open source read-only FGDB driver. Using the “open FGDB” driver allows us to use a stock build of GDAL without incorporating the proprietary Esri API libraries.

The file geodatabase format is a collection of files inside a directory named with a .gdb extension. In order to transfer that structure around, FGDB files are first zipped up. So, any FGDB data you receive will be a zip file that unzips to a .gdb directory.

FGDB data are loaded to CARTO just like any other format.

  • Use the “New Dataset” option and either browse to your FGDB .zip file or drag’n’drop it in.
  • Or, just drag’n’drop the .zip file directly into the datasets dashboard.

After loading, you will have one new dataset in your account for each layer in the FGDB, named using a datasetname_layername pattern.

For example, the Elections.zip file from Clark County, Nevada, includes 11 layers, as we can see by looking at the ogrinfo output for the file.

INFO: Open of `Election.gdb' using driver `OpenFileGDB' successful.
1: senate_p (Multi Polygon)
2: school_p (Multi Polygon)
3: regent_p (Multi Polygon)
4: precinct_p (Multi Polygon)
5: ward_p (Multi Polygon)
6: congress_p (Multi Polygon)
7: pollpnts_x (Point)
8: educat_p (Multi Polygon)
9: township_p (Multi Polygon)
10: commiss_p (Multi Polygon)
11: assembly_p (Multi Polygon)

After upload, the file has been converted to 11 datasets with the standard naming pattern.

Multiple FGDB Layers


If the FGDB format is so much better than shape files, why doesn’t the story end there?

Because FGDB still has a few major problems:

  • There is no open source way to write to an FGDB file: that requires the proprietary Esri API libraries.
  • The FGDB format is a directory, which means shipping it around involves annoying extra zip/unzip steps each time.
  • The FGDB format is closed, so there is no way to extend it for special use cases.

A couple of years after FGDB was released, the Open Geospatial Consortium (OGC) took on the task of defining a “shape file replacement” format that learned all the lessons of shape files, personal geodatabases, and file geodatabases.

  • Use open source SQLite as the storage engine: more reliable and platform independent than MDB, and with the advantage of easy, language-independent read/write access via SQL.
    • The SQLite engine is open source and multi-platform, so no Windows dependency.
    • The SQLite engine stores data in a single file, so no need to zip/unzip all the time.
  • Leverage existing OGC standards like the WKT standard for spatial reference systems, and the WKB standard for binary geometry representation.
  • Document the format and include an extension mechanism so it can evolve over time and so third parties can experiment with new extensions.

The result is the GeoPackage (GPKG) format, which has become widely used in the open source world, and increasingly throughout the geospatial software ecosystem.

Loading GeoPackage into CARTO now works exactly the same as FGDB: use the “New Dataset” page, or just drag the file into the dataset dashboard. All the layers will be imported, using the filename_layername pattern.

You can also now use GeoPackage as an export format! Click the export button and select the GPKG format, and you’ll get a single-layer GeoPackage with your table inside, ready for sharing with the world.

Download GPKG Layers

Thanks, GDAL!

All this works because of the wonderful multi-format tools in the GDAL library, which we use as part of our import process. You can harness the power of GDAL yourself to solve your CARTO ETL problems directly, using the ogr2ogr and ogrinfo tools in GDAL. Check it out!

Categories: OSGeo Planet

Just van den Broecke: Emit #5 – Assembling and Deploying 5 AirSensEURs – a Story in Pictures

OSGeo Planet - Wed, 2018-08-15 21:01

This is Emit #5, in a series of blog-posts around the Smart Emission Platform, an Open Source software component framework that facilitates the acquisition, processing and (OGC web-API) unlocking of spatiotemporal sensor-data, mainly for Air Quality and other environmental sensor-data like noise.

Summer holidays and a heat-wave strikes The Netherlands. Time for some lighter material mainly told in pictures. As highlighted in Emit #2, I have the honor of doing a project for the European Union Joint Research Centre  (EU JRC), to deploy five AirSensEUR (ASE) boxes within The Netherlands, attaching these to the Smart Emission Platform in cooperation with RIVM (National Institute for Public Health and the Environment). The ASE boxes measure four Air Quality (AQ) indicators: NO2 (Nitrogen Dioxide), NO (Nitrogen Monoxide), O3 (Ozone) and CO (Carbon Monoxide) plus meteo (Temp, Humidity, Air Pressure) and GPS. Read more on ASE in this article.

ASE Architecture

The ASE is an Open Hard/Software platform that can be configured with multiple brands/types of sensors. In the current case all four above-mentioned AQ sensors are from AlphaSense. As these are relatively cheap sensors (under $100), the challenge is to have them calibrated before final deployment. This calibration is done by placing the ASE boxes first at an RIVM station, gathering data for a month or two, and then calibrating these sensors against official RIVM reference measurements at the same location. Using both the raw ASE data and the RIVM reference data, the calibration “formulae” can be determined, before placing the ASEs at their final deployment locations around The Netherlands and having the Smart Emission Platform assemble/publish the (calibrated) data for the next year or so. More info on AirSensEUR via this Google Search.
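To give a flavor of the idea (not the actual JRC calibration procedure, which is more sophisticated and also accounts for temperature, humidity, and sensor cross-sensitivities), a minimal least-squares calibration against co-located reference data could look like this, with made-up numbers:

```python
def fit_linear_calibration(raw, reference):
    """Least-squares fit of reference ~ a * raw + b over the co-location period.

    A deliberately minimal sketch: real AirSensEUR calibration models are
    richer than a single straight line.
    """
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    sxx = sum((x - mean_x) ** 2 for x in raw)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Hypothetical raw NO2 sensor counts vs. RIVM reference concentrations (ug/m3):
raw = [1200, 1350, 1500, 1650, 1800]
ref = [10.0, 17.5, 25.0, 32.5, 40.0]
a, b = fit_linear_calibration(raw, ref)
calibrated = [a * x + b for x in raw]
```

Once a and b are determined from the co-location period, the same formula is applied to the raw data streaming in from the field deployments.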

Ok, picture time!  Explanatory text is below each picture.

1. ASEs unboxed

Picture 1: Boxes arrived from EU JRC Italy on June 12, 2018. Assembling: upper left shows the (total of 20) AlphaSense sensors like “blisters” (Dutch “doordrukstrips”), the ASE box (with screwdrivers on top) and the protecting metal outer shield on the right.

2. placing AlphaSense sensors in sockets

Picture 2: Very carefully placing the AlphaSense sensors in the ASE Sensor Shield (an Arduino-like board) without touching the top-membrane!

3. All sensors firmly in their sockets

Picture 3: all sensors placed; next, attach power, connect to the network, and continue configuring!

4. Boxes humming and connected via WIFI to the LAN

Picture 4: On default startup (via touch buttons) the ASE will expose a default WIFI access point. This can be used to connect and log in to the “ASE Host Board”, a Raspberry Pi-like board running standard Linux Debian. SSH into each box and further configure e.g. the WIFI settings to make it a WIFI client, first having all boxes connect to the local office WLAN.

5. configured for InfluxDB Data Push visualized via Grafana

Picture 5. Each box runs a Data Aggregator and can be configured to push data to a remote InfluxDB database. In our case we have setup a Smart Emission InfluxDB Data Collector where the (raw) data is received. This InfluxDB datastore is visualized using a Grafana Panel shown in the picture. We see the five boxes ASE_NL_01-05 sensing and pushing data!


6. All packed and in trunk of my car

Picture 6. A good start, but next we need to go out and place the boxes at the RIVM station for a period of calibration. So tearing down, packing, everything into the trunk of my car. Off to the RIVM station! July 30, 2018, still 35 degrees C outside.

7. The RIVM sensor station, right near the highway

Picture 7. July 30, 2018, 13:00. Arrived at the RIVM station. Now to figure out how to attach the five boxes. The lower horizontal iron pole seems the best option. Put all soft/hardware knowledge away, now real plumbing is required!

8. Could not have made this without the great help of Jan Vonk (RIVM)

Picture 8. Jan Vonk of RIVM, who has also deployed about 12 ASEs, placing the first boxes on the horizontal pole; so far so good.

9. All five boxes attached!

Picture 9. All five boxes strapped to the pole. Jan Vonk doing the hard work. Next challenge: they need power and WIFI…

10. Connecting to power…

Picture 10. One cannot have enough power sockets.

11. Power supplies covered under plastic box.

Picture 11. Covering all the power supply gear under a tightened box, shielded from rain.

12. Moment of truth starting up and attaching to local WIFI

Picture 12. July 30, 2018, 17:00. Last challenge: booting up the boxes and having them connect to the local RIVM station’s WIFI. I had pre-configured the WLAN settings in each box, but this is always a moment of truth: will they connect? If they do, they will start sampling and push their raw data to the Smart Emission Platform… Then we can start the calibration period. And success… they connected!

13. All boxes connected and sampling and pushing data.

Picture 13. Now on August 15, 2018, after minor hiccups, and with great help from the JRC folks Marco Signorini and Michel Gerboles, we have all five boxes sampling and pushing data for the calibration period. The above plot shows raw NO2 data, to be calibrated.

A next step for the RIVM Program “Together Measuring Air Quality”.

So a good start! The heatwave is over, and the next hard work is calibration. Why are we doing this? Well, as with meteorology, RIVM and others are encouraging basically anyone, from groups of citizens to individuals, to measure Air Quality. For this, RIVM has set up the program “Samen meten aan Luchtkwaliteit” (“Together measuring air quality”). Measuring Air Quality is not an easy task. We need to learn by doing, make mistakes, and spread the knowledge gained. Both AirSensEUR and Smart Emission are therefore Open. Below are some further links:

Smart Emission: GitHub, WebSite, Documentation, and Docker Images.

Categories: OSGeo Planet

Jackie Ng: A short MapGuide poll

OSGeo Planet - Wed, 2018-08-15 12:21
If you are a MapGuide user/developer reading this post, I have a short survey I'd like you to take part in.

I'll leave submissions open until the end of this month which should be enough time to gather a good representative opinion on this matter.
Categories: OSGeo Planet

CARTO Inside Blog: Bulk CARTO Import Using COPY

OSGeo Planet - Tue, 2018-08-14 16:50

There are only three certainties in life: death, taxes, and the constant growth in data sizes.

To deal with the latter, we have introduced a new mode to our SQL API: copy mode.

The new /sql/copyfrom and /sql/copyto endpoints are direct pipes to the underlying PostgreSQL COPY command, allowing very fast bulk table loading and unloading.

With the right use of the HTTP protocol, data can stream directly from a file on your local disk to the CARTO cloud database.

What’s The Difference?

Import API

When you import a file using the dashboard, or the Import API, we first pull a copy up to the cloud, so we make one copy.

Then, we analyze your file a bit. If it’s a text file, we’ll try and figure out what columns might be geometry. Once we’re pretty sure what it is, we’ll run an OGR conversion process to bring it into the CARTO database. So we’ve made another copy (and we get rid of the staging copy).

Once it is in the database, we still aren’t quite done yet! We need to make sure the table has all the columns the rest of the CARTO platform expects, which usually involves making one final copy of the table, and removing the intermediate copy.

Import API Process

That’s a lot of copying and analysis!

On the upside, you can throw almost any old delimited text file at the Import API and it will make a good faith effort to ensure that at the end of the process you’ll have something you can put on a map.

The downside is all the analyzing and staging and copying takes time. So there’s an upper limit to the file size you can import, and the waiting time can be long.

Also, you can only import a full table; there’s no way to append data to an existing table, so for some use cases the Import API is a poor fit.


SQL API COPY

In order to achieve a “no copy” stream from your file to the CARTO database, we make use of HTTP chunked transfer encoding to send the body of a POST message in multiple parts. We will also accept non-chunked POST messages, but for streaming large files, using chunked encoding lowers the load on our servers and speeds up the process. Ambitious clients can even use a compressed encoding for more efficient use of bandwidth.

SQL API Copy Process

At our SQL API web service, we accept the HTTP POST payload chunks and stream them directly into the database as a PostgreSQL COPY, using the handy node-pg-query-stream module.

The upside is an upload that can be ten or more times faster than using the Import API, and supports appending to existing tables. You also have full control of the upload process, to tweak to your exact specifications.

The downside is… that you have full control of the upload process. All the work the Import API usually does is now delegated to you:

  • You will have to create your target table manually, using a CREATE TABLE call via the SQL API before running your COPY upload.
  • If your upload file doesn’t have a geometry column, you’ll have to compose one on your side for optimum performance.

    • You can use a post-upload SQL command to, for example, generate a point geometry from a latitude and longitude column, but that will re-write the whole table, which is precisely what we’re trying to avoid.
  • You will have to run CDB_CartodbfyTable() yourself manually to register your uploaded table with the dashboard so you can see it. For maximum speed, you’ll want to ensure your table already contains the required CARTO columns or the “cartodbfy” process will force a table rewrite to fill them in for you.

For Example…

Suppose you had a simple CSV file like this:

the_geom,name,age
SRID=4326;POINT(-126 54),North West,89
SRID=4326;POINT(-96 34),South East,99
SRID=4326;POINT(-6 -25),Souther Easter,124

You would create a table using this DDL:

CREATE TABLE upload_example (
  the_geom geometry,
  name text,
  age integer
);

Then “cartodbfy” the table so it is visible in the dashboard:

SELECT CDB_CartodbfyTable('upload_example');

And finally, upload the file:

COPY upload_example (the_geom, name, age) FROM STDIN WITH (FORMAT csv, HEADER true)

A copy call consists of two parts: an invocation of the COPY SQL command to specify the target table and the format of the input file; and the file payload itself.

For example, this shell script pipes a CSV file through a compressor and then to a streamed curl POST upload, so the data moves directly from the file to CARTO.

#!/bin/bash
user=<username>
key=<apikey>
filename=upload_example.csv
sql="COPY+upload_example+(the_geom,name,age)+FROM+STDIN+WITH+(FORMAT+csv,HEADER+true)"

cat $filename \
  | gzip -1 \
  | curl \
      -X POST \
      -H "Content-Encoding: gzip" \
      -H "Transfer-Encoding: chunked" \
      -H "Content-Type: application/octet-stream" \
      --data-binary @- \
      "http://${user}.carto.com/api/v2/sql/copyfrom?api_key=${key}&q=${sql}"

Note that the COPY command specifies the format of the incoming file, so the database knows where to route the various columns:

  • The tablename (column1, column2, column3) portion tells the system what database column to route each column of the file to. In this way you can load files that have fewer columns than the target table.
  • FORMAT CSV tells the system that the format is a delimited one (with comma as the default delimiter).
  • HEADER TRUE tells the system that the first line is a header, not data, so it should be ignored. The system will not use the header to route columns from the file to table.

Also note that on a fast network, light compression (see the -1 flag on the gzip command) works best for uploads: it balances the gain from a smaller payload against the cost of decompressing it at the server, for the fastest overall speed.
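You can get a feel for this tradeoff without touching the network. The snippet below (illustrative only, with a made-up repetitive payload) compares gzip levels by output size and compression time:

```python
import gzip
import time

# A CSV-like payload, repetitive like most real data files (hypothetical content).
payload = b"SRID=4326;POINT(-126 54),North West,89\n" * 50000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = gzip.compress(payload, compresslevel=level)
    elapsed = time.perf_counter() - start
    print("level %d: %d bytes in %.3fs" % (level, len(compressed), elapsed))
```

On typical CSV-like data you should see level 1 finish much faster while giving up only a modest amount of size relative to level 9, which is why it pays off when the network is not the bottleneck.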

Next Steps
  • If you’re interested in using the SQL API COPY infrastructure for your uploads or ETL, start with the SQL API documentation for COPY. There are some basic examples for streaming data with Python and using curl for uploads and downloads.
  • You can do lots of basic ETL to and from CARTO using the ogr2ogr utility. The next release will include support for COPY, for a 2x speed-up, but even without it, the utility offers a convenient way to upload data and append to existing tables without going through the Import API.
Categories: OSGeo Planet

Blog 2 Engenheiros: How to replace an item in the Attribute Table in ArcGIS and QGIS?

OSGeo Planet - Tue, 2018-08-14 06:49

Every map needs retouching.

Either because your first attempt left it looking unpleasant, or because someone asked you to change some data. Usually, it's the latter.

In some situations these changes are quick: tweak a color here, another there, adjust the scale settings, and everything is ready.

However, not all requests are that easy. Some require modifying our entire database, that is, our whole Attribute Table.

Imagine the following situation, which we will explore in this tutorial: you made a land use and land cover map of a given study area and divided the uses into:

  • Pastagem (pasture);
  • Área Urbana (urban area);
  • Reflorestamento (reforestation);
  • Agricultura (agriculture);
  • Vegetação Secundária Estágio Inicial (early-stage secondary vegetation);
  • Vegetação Secundária Estágio Médio (mid-stage secondary vegetation); and
  • Vegetação Secundária Estágio Avançado (advanced-stage secondary vegetation).

So far, so good.

But suppose the client wants to modify the map and, instead of using the terms Pastagem, Área Urbana, Reflorestamento, and Agricultura, wants to use “Áreas Antropizadas” (human-altered areas).

If your land use shapefile has only a few polygons, this process can be quick, but what if you have more than 100? 1,000? Are you going to edit them one by one? No.

Follow this post and find out how to use the Field Calculator to solve this problem in ArcGIS and QGIS.

Preparing a File for the Tutorial

In ArcGIS, a shapefile can be created from the ArcToolbox. There, look for the Create Feature Class tool, found under Data Management Tools > Feature Class.

In it, you will provide the location of the shapefile to be created (Feature Class Location), the file name (Feature Class Name), and the geometry type (Geometry Type).

The other fields are optional.

With the shapefile created, draw a few polygons and enter the classes presented above in the attribute table. The figure below shows the shapefile we created and its attribute table.

Shapefile created for the Blog 2 Engenheiros tutorial.

In QGIS, shapefiles are created by clicking the Layer menu, then Create Layer, and then New Shapefile Layer.

A window will open where you will fill in the geometry type (point, line, or polygon), the coordinate system, and the fields of the attribute table.

When you click OK, QGIS will ask where you want to save the new file. The figure below shows the result.

Shapefile created for the Blog 2 Engenheiros tutorial.

Replacement in ArcGIS

Now, with our shapefile in hand, we will make the modification requested by the client.

Remember that changes made to the attribute table without turning on edit mode are permanent.

Right-click the shapefile and select Open Attribute Table. We will then create a new column, so as not to lose the original information that we are going to change.

We will call the new column uso_solo2 (new columns are created by clicking Table Options and then Add Field).

A new window will open asking for the name of the new column and the type of data it will hold. In our case, the data is of type Text.

Once you have created the new column, right-click it and select the Field Calculator option. There we will use a Python function to replace several manual steps with just a few lines of code.

Before writing the Python code, remember to select the “Python” option (at the top of the window) and check “Show Codeblock”, which enables the “Pre-Logic Script Code” box, where we will enter the code below.

def TrocarB2E( Valor ):
    if Valor == "Pastagem" or Valor == "Área Urbana" or Valor == "Reflorestamento" or Valor == "Agricultura":
        return u"Área Antropizada"
    else:
        return Valor

This code receives a given Valor and, if it equals Pastagem, Área Urbana, Reflorestamento, or Agricultura, returns the text Área Antropizada as the result; otherwise, it returns the input value itself.

After entering this code in the “Pre-Logic Script Code” field, in the text box below it, which is labeled with the column name (in our case, uso_solo2), we write our function call, opening parentheses and passing in the uso_solo column to convert the values, as in the code below.

TrocarB2E( !uso_solo! )

The result, along with the fields filled in for the Field Calculator, is shown in the following figure.

Python code and results of the Field Calculator operation in ArcGIS.

Replacement in QGIS

In QGIS, with our shapefile already loaded, open its attribute table by right-clicking it and selecting Open Attribute Table.

In the attribute table there is a button with an abacus icon; click it (or use the shortcut Ctrl+I). This will open the QGIS Field Calculator.

Remember to check the Create New Field box so that the operation generates a new column with the new values.

In the expression box, use the code below, where each line represents a condition and its expected result.

CASE
    WHEN "uso_solo" IS 'Pastagem' THEN 'Área Antropizada'
    WHEN "uso_solo" IS 'Área Urbana' THEN 'Área Antropizada'
    WHEN "uso_solo" IS 'Reflorestamento' THEN 'Área Antropizada'
    WHEN "uso_solo" IS 'Agricultura' THEN 'Área Antropizada'
    ELSE "uso_solo"
END

After clicking OK, a new column will be generated with the new classes. The figure below shows the procedure and the result in QGIS.

Field Calculator procedure and result in QGIS.

Note that in the QGIS Field Calculator there is a distinction between column names (which use double quotes) and text values (which use single quotes).

In the code we used, we created several “cases”: when (“when”) the value of the field equals a given value, the corresponding result (“then”) is returned.

Remember that this procedure can also be done with integer values, making it possible to replace values within one interval with others.
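In the same spirit as the TrocarB2E function earlier in the post, a hypothetical ArcGIS Field Calculator function for integer intervals (the field and class names here are invented for illustration) could look like:

```python
def ClassificarB2E(valor):
    """Hypothetical example: replace an integer value (here, slope in percent)
    with the label of the interval it falls in."""
    if valor < 3:
        return "Plano"
    elif valor < 8:
        return "Suave"
    elif valor < 20:
        return "Ondulado"
    else:
        return "Forte"
```

It would be called as ClassificarB2E( !declividade! ), with declividade being a hypothetical integer slope column.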

If you have any questions, feel free to leave them in the comments and we will reply as soon as possible.

Categories: OSGeo Planet

Paulo van Breugel: Draw a histogram of vector attribute column in GRASS GIS

OSGeo Planet - Mon, 2018-08-13 20:18
GRASS GIS has convenient tools to draw histograms of raster values, but a similar tool to draw a histogram of the values in a vector attribute table is lacking. You can easily add this functionality by installing the new d.vect.colhist addon by Moritz Lennert. Read this short post on Ecodiv.earth tutorials.
Categories: OSGeo Planet

PostGIS Development: PostGIS 2.5.0beta2

OSGeo Planet - Sat, 2018-08-11 00:00

The PostGIS development team is pleased to release PostGIS 2.5.0beta2.

This release is a work in progress. Remaining time will be focused on bug fixes and documentation until PostGIS 2.5.0 release. Although this release will work for PostgreSQL 9.4 and above, to take full advantage of what PostGIS 2.5 offers, you should be running PostgreSQL 11beta3+ and GEOS 3.7.0beta2 which were released recently.

Best served with PostgreSQL 11beta3 which was recently released.

Changes since PostGIS 2.5.0beta1 release are as follows:

  • 4115, Fix a bug that created MVTs with incorrect property values under parallel plans (Raúl Marín).
  • 4120, ST_AsMVTGeom: Clip using tile coordinates (Raúl Marín).
  • 4132, ST_Intersection on Raster now works without throwing TopologyException (Vinícius A.B. Schmidt, Darafei Praliaskouski).
  • 4109, Fix WKT parser accepting and interpreting numbers with multiple dots (Raúl Marín, Paul Ramsey).
  • 4140, Use user-provided CFLAGS in the address standardizer and the topology module (Raúl Marín).
  • 4143, Fix backend crash when ST_OffsetCurve fails (Dan Baston).
  • 4145, Speed up MVT column parsing (Raúl Marín).

View all closed tickets for 2.5.0.

After installing the binaries or after running pg_upgrade, make sure to do:


ALTER EXTENSION postgis UPDATE;
-- if you use the other extensions packaged with postgis, make sure to upgrade those as well:
ALTER EXTENSION postgis_sfcgal UPDATE;
ALTER EXTENSION postgis_topology UPDATE;
ALTER EXTENSION postgis_tiger_geocoder UPDATE;

If you use legacy.sql or legacy_minimal.sql, make sure to rerun the version packaged with these releases.
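To confirm that the upgrade took effect, you can query the standard PostGIS version functions (shown here only as a quick sanity check):

```sql
-- Full version string of PostGIS and its component libraries (GEOS, PROJ, ...)
SELECT postgis_full_version();

-- Version of the extension as registered in the PostgreSQL catalog
SELECT extversion FROM pg_extension WHERE extname = 'postgis';
```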


Categories: OSGeo Planet

gvSIG Team: On public tenders and requirements exclusive to free software

OSGeo Planet - Thu, 2018-08-09 09:38

I read, and not for the first time, in the technical specifications of a public tender that bidders who dare to propose a solution based totally or partially on free software must justify, in addition to the technical proposal, the stability, robustness, and degree of market penetration of the free software components. Likewise, they must show that these components have the backing of a community of users and developers broad enough to guarantee their future evolution and viability.

That does not seem wrong to me; quite the opposite. But I wonder why this is only required when the proposal is based on free software.

It would be good if bidders considering proprietary software were required to do the same. Not only robustness, stability, and market presence, but also a community of users and developers broad enough to guarantee the technology's evolution and future. And of course, that community of developers would need access to the source code to fulfil the latter. Otherwise, what happens is what always happens: one day we have ArcIMS and the next we wake up with ArcGIS and are left stranded (in other words, pay up again!), or the company changes its commercial policies, or, as has happened more than once, the company simply disappears or abandons a given technology.

Perhaps that is why they do not require it: if the same were demanded of proprietary software as of free software, proprietary vendors would not stand a chance. What is questionable is the legality, not to mention the ethics, of this kind of one-sided condition.

Categories: OSGeo Planet

Fernando Quadro: PostGIS DBA Course – Class 4

OSGeo Planet - Thu, 2018-08-09 05:37

Dear reader,

Are you interested in learning to work with spatial databases, and do you already have experience with a database system? Then this is your opportunity!

GEOCURSOS has just launched Class 4 of the PostGIS DBA Course. This online course offers a complete overview, from a review of PostgreSQL to advanced PostGIS topics, showing how to work fully with this powerful spatial extension of the PostgreSQL database.

This course combines our PostGIS Basic (16 hours online) and PostGIS Advanced (20 hours online) courses and will run from September 22 to December 8 (on Saturdays).

Purchased separately, the courses would cost R$ 900.00. However, with the current promotion, the combined course costs only R$ 599.00.

For more information and the full course syllabus, visit:


Categories: OSGeo Planet

Geomatic Blog: 12 years with you

OSGeo Planet - Tue, 2018-08-07 09:22

This blog turns 12 years old today. Yes, we haven't posted much lately, but we are still around, busy with our projects, families, and lives. We keep sharing content on our Twitter and LinkedIn accounts and will certainly maintain this place. 12 years is already a lot, and our passion for geospatial content and sharing it is no less than when we started this space.

Leave us a comment if you remember a piece of content we created that you liked, or whatever comes to mind!

We love you!!

Photo by Audrey Fretz on Unsplash

Categories: OSGeo Planet

Hands-on to GIS in FOSS: GRASS GIS: Time series management for applications in ecology and environment

OSGeo Planet - Sat, 2018-08-04 14:31
Postgraduate course at the Universidad Nacional de Río Cuarto, from October 22 to 26, 2018

Much of today's research in ecology and the environment requires technical expertise in the advanced processing of large spatio-temporal datasets. There is thus an urgent need to train potential users in efficient ways of managing and analyzing the large volumes of data that space agencies and various other institutions make publicly available every day. This course will cover the processing and analysis of spatio-temporal data with one of the most popular open source Geographic Information Systems (GIS): GRASS GIS 7.


During this course, participants will gain an overview of the capabilities of GRASS GIS and hands-on experience in processing raster, vector, and time series data from satellite products for ecological and environmental analyses. In addition, examples of spatial data analysis using the GRASS – R interface through the rgrass7 package will be presented.

The course, taught by Dr. Verónica Andreo, will take place from October 22 to 26, 2018, in the postgraduate classroom of the Facultad de Ciencias Exactas, Físico-Químicas y Naturales of the Universidad Nacional de Río Cuarto (Córdoba, Argentina).

Information and registration: cecilia.provensal@gmail.com / cprovensal@exa.unrc.edu.ar

Places are limited, don't miss it!


Categories: OSGeo Planet

gvSIG Team: Presentation and workshop about The Horton Machine (JgrassTools) for gvSIG Desktop

OSGeo Planet - Fri, 2018-08-03 10:18

The Horton Machine, previously known as JgrassTools, is a set of tools available in the latest gvSIG Desktop version. Thanks to The Horton Machine we have a lot of new functionality in gvSIG, especially useful for working in areas such as hydrology (but not only!).

As the ‘About Hydrology’ blog has done, we want to help spread the presentation given at FOSS4G-Europe in Guimarães by the HydroloGIS company, as well as the documentation used in the workshop, which will help you learn how to use this powerful set of tools.

You can find the presentation here.

And about the workshop material, you have:

Categories: OSGeo Planet

gvSIG Team: Presentation and workshop about The Horton Machine (JgrassTools) for gvSIG Desktop

OSGeo Planet - Fri, 2018-08-03 09:42

The Horton Machine, previously known as JgrassTools, is a set of tools available in the latest version of gvSIG Desktop. Thanks to The Horton Machine we have a lot of varied functionality in gvSIG, especially useful for working in areas such as hydrology (but not only!).

As the ‘About Hydrology’ blog has done, we want to help spread the presentation given at FOSS4G-Europe in Guimarães by the HydroloGIS company, along with the documentation used in the workshop, which will help you learn how to use this powerful set of tools.

You can find the presentation here.

And as for the workshop material, you have:

Categories: OSGeo Planet

gvSIG Team: GODAN AgriGIS session at the 14th International gvSIG Conference: presentations on Agriculture

OSGeo Planet - Fri, 2018-08-03 08:24

During the 14th International gvSIG Conference there will be a GODAN AgriGIS session with presentations related to Agriculture, and we invite you to present your projects.

Nearly 800 million people struggle with hunger and malnutrition in every corner of the world. That is one in every nine people, the majority of them women and children. We are convinced that the solution to Zero Hunger lies in existing agriculture and nutrition data, which is often unavailable. Global Open Data for Agriculture and Nutrition (GODAN) is an initiative that seeks to support global efforts to make relevant agricultural and nutrition data available, accessible, and usable without restriction worldwide.

The initiative focuses on building high-level policy as well as public and private institutional support for open data. Open geospatial data and open geo tools are key to supporting and achieving the 2030 agenda for global food security.

We invite you to present your AgriGIS projects, as well as research and examples on food security. To do so, just send your abstracts through the Communications section of the website before September 11, 2018.


Categories: OSGeo Planet

gvSIG Team: GODAN AgriGIS session at 14th International gvSIG Conference: presentations on the Agriculture theme

OSGeo Planet - Fri, 2018-08-03 08:21

During the 14th International gvSIG Conference there will be a GODAN AgriGIS session with presentations on the Agriculture theme, and we invite you to present your projects.

Nearly 800 million people struggle with debilitating hunger and malnutrition in every corner of the globe. That’s one in every nine people, with the majority being women and children. We are convinced that the solution to Zero Hunger lies within existing, but often unavailable, agriculture and nutrition data. Global Open Data for Agriculture and Nutrition (GODAN) is an initiative that seeks to support global efforts to make agricultural and nutritionally relevant data available, accessible, and usable without restriction worldwide.

The initiative focuses on building high-level policy as well as public and private institutional support for open data. Open Geospatial data and open geo tools are key in supporting and achieving the 2030 agenda for Global Food Security.

Your contributions and inputs on AgriGIS, FoodSecurity research and examples are welcome. Please send your abstracts through the Communications section of the website by 11th September 2018.

Categories: OSGeo Planet

MapTiler: Highlights from the OpenStreetMap conference

OSGeo Planet - Fri, 2018-08-03 07:00

State of the Map (SotM) is an annual international conference which brings together stakeholders in the OpenStreetMap ecosystem. The attendees range from enthusiastic mappers, software developers, academics, open-data evangelists, and NGOs to companies using the geodata in their applications. But what connects them all is a passion for open maps.

State of the Map 2018

This year, SotM took place from 28 to 30 July in Milan, Italy. The event was hosted by the Polytechnic University of Milan on its beautiful campus on Piazza Leonardo da Vinci. Our presentation about OpenMapTiles became a vanguard for one of the most discussed topics of the whole conference: vector tiles.

Vector tiles are the future

The common thread of the whole conference was vector tiles. It started with our presentation and continued with others showing how vector tiles are used in real-world deployments, such as the use case of the Helsinki Regional Transport Authority.

However, the most important discussion was about vector tiles on the main page of OpenStreetMap.org. While nothing is set in stone yet, there was general agreement that the main page should gradually switch to vector tiles.

Vector tiles used by the Helsinki Regional Transport Authority

Data quality

Finding errors, preventing vandalism, reverting malicious edits: the search for an ideal system for keeping data quality high is still ongoing. Currently, there are many useful tools, and we can expect more automated mechanisms for preventing data degradation, but none of them is fully capable of replacing a human editor (yet?).

A new data model for OpenStreetMap

A topic touched on by a few speakers and discussed in the OpenHistoryMap debate. The current data model was created some time ago and has issues such as the inability to use two values for one key, a complicated way of working with relations, and redundancy.

While some speakers suggest a simplification, summed up by the phrase "Evolution, no revolution", others call for a radically different model similar to the one used by Wikidata.

"Evolution, no revolution" approach of Jochen Topf

Public transport

Since the beginning of OpenStreetMap, public transport infrastructure was mapped on nodes. This was simple but failed to describe complex situations and caused problems for routing. Therefore schema v2 was created, which fixed those issues, but at the price of high complexity for mappers. Consequently, a new public transport schema v3 is in the proposal stage right now.

However, the issue of public transport is more complicated because of informal systems in the Global South and former Soviet Union countries. A few talks raised the topic of mapping in developing countries, and many discussions focused on how to map these systems and even whether public transport data belongs in OpenStreetMap.

Humanitarian mapping

Unlike commercial maps, OpenStreetMap is not focused only on the most profitable parts of the world. Projects like the Humanitarian OpenStreetMap Team, Missing Maps, and others map the developing world to give a voice to underrepresented communities. Thanks to grants from the OpenStreetMap Foundation, many representatives of local communities were able to make it to the conference, increasing its diversity.

Speaking of diversity, the topic of gender representation in OpenStreetMap was also addressed, with a few proposals on how to improve the situation, but the solution is a long-distance run.

Photo ©Francesco Giunta CC-BY-SA 4.0

Social event

Probably the most relaxing part of the whole conference. Great atmosphere, delicious food, good music and a lot of friendly people open to chatting about maps and beyond.

Big thanks to the organizers, and see you next year!

Since this year’s SotM is over, we would like to say a big thank you to all the organizers, the OpenStreetMap Foundation, and all the speakers for making such an outstanding event possible.

In case you missed any interesting talk, you can find most of them on YouTube, including ours about generating your own vector tiles from OpenStreetMap data, with slides on SlideShare.

See you all next year at SotM 2019 in Heidelberg!

Categories: OSGeo Planet

Jackie Ng: Announcing: MapGuide Maestro 6.0m10 and a new build of Fusion

OSGeo Planet - Thu, 2018-08-02 16:44
This announcement is a double-whammy because the two releases are somewhat intertwined.

Firstly, we'll start with a new build of Fusion that cleans up several aspects of the viewer framework. These changes in Fusion are slated for inclusion in a future release of MapGuide.

New PHP entry point for fusion templates

This new build of Fusion introduces a new PHP entry point for the 5 Fusion templates.

Instead of loading, say, the Slate template like so:


You can now load it like so:


This entry point supports all the same query string parameters as the original templates but with support for the following additional parameters:
  • template (required) - The name of the template to load 
  • debug (optional) - If set to 1, the entry point will use fusion.js instead of fusionSF-compressed.js, making the whole debug/development process much simpler.
The entry point also fetches the application definition and writes it out to the underlying template as JSON, avoiding the need for the initial round trip to fetch this document among other things.
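The exact URLs depend on your MapGuide install, but purely as an illustration (server name and paths here are hypothetical), the two loading styles might look like this:

```
Old style - load the template's own page directly:
  http://myserver/mapguide/fusion/templates/mapguide/slate/index.html?ApplicationDefinition=...

New style - single PHP entry point, template chosen by parameter:
  http://myserver/mapguide/fusion/index.php?template=slate&debug=1&ApplicationDefinition=...
```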

Fixing external base layer support 

The way external base layers are currently supported in Fusion is a bit crufty, which this build also addresses:
  • Fusion no longer attempts to append script tags to support Google Maps / Bing / OSM / Stamen. The way we do it is unsafe according to modern browsers. That is now the responsibility of the new entry point.
  • For Bing / OSM / Stamen, we no longer load external helper scripts at all. This is because such support is already present in OpenLayers itself and the external scripts are merely convenience wrappers that we can easily implement ourselves in Fusion.
  • Finally, XYZ layers are now properly supported in Fusion. This means you can do things like consuming OSM tiles from your own OSM tile server, or maybe use non-watermarked CycleMap/TransportMap layers, or you can finally be able to consume your own MapGuide XYZ Tile Set Definitions.
Now, the reason this announcement is a double-header is that these changes in Fusion really need tooling assistance to take full advantage of them, so here's also a new release of MapGuide Maestro. Here are the notable changes.

Fusion XYZ layer editor support

Now that we have proper XYZ tile layer support in Fusion, we also have support for consuming external XYZ tile sets.

Because MapGuide XYZ Tile Set Definitions can now be consumed with Fusion, you can specify a tile set URL for such a Tile Set Definition by clicking the new Add from XYZ Tile Set Definition toolbar button.

If you want to consume OSM tiles from your own OSM tile server, you can use this editor and point it to your tile server with the requisite ${x}, ${y} and ${z} placeholders.
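For instance, a hypothetical XYZ tile URL template for your own tile server might look like this (the server name is made up; the placeholders are substituted at runtime):

```
https://tiles.example.com/osm/${z}/${x}/${y}.png
```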

RtMapInspector tool improvements 

The other major feature of this new release is improvements to the RtMapInspector.

For a refresher, the RtMapInspector tool was introduced way back in a beta release of Maestro 5.0. Its purpose was to let you easily inspect the state of any runtime map, given its session id and map name, so you can debug map manipulation and state updates. Did your MapGuide application code correctly add your new layer? You can use this tool to inspect the map state and find out.

Having looked at this tool more recently, I've come to realise that it was only skimming the surface of what it could do, and with this release its capabilities have been significantly expanded.

For example, the tool now lets you see the map image for the current map state.

But it also occurred to me that if we're inspecting a map, we can (and should be able to) inspect its layers (and their feature sources) too! So when you select any layer in this tool, a new Inspect Layer button is enabled.

Clicking it will bring up an inspection dialog that shows the layer's XML and lets you interact with its feature source using the same components as the local feature source preview.

Other Changes
  • Now requires .NET Framework 4.7.1. The Windows installer will check for this (and install it if required)
  • The Maestro API now uses the latest stable releases of NetTopologySuite and GeoAPI
  • The MgTileSeeder tool now targets .NET Core 2.1

Download MapGuide Maestro
Download test build of Fusion
Categories: OSGeo Planet

gvSIG Team: gvSIG Black: New icon (and information) set to customize gvSIG Desktop

OSGeo Planet - Thu, 2018-08-02 08:57

Do you want to change the appearance of gvSIG Desktop? Then keep reading …

One of the novelties included in gvSIG 2.4 was a plugin that allows you to generate your own icon sets and apply them to the gvSIG Desktop interface. It makes it possible to change the gvSIG style as well as to have icons in different sizes. Apart from the ‘classic’ icon theme (16×16 pixels) in gvSIG Desktop 2.4, the Add-ons Manager let us install an icon set made by TreCC, available in 16×16 and 22×22 pixels.

Currently the gvSIG Association team is working on the next 2.4.1 version (already in the stabilization phase) and, in parallel, on version 3.0, a version that will bring important changes, including improvements related to the usability and aesthetics of the application. In connection with these issues we have been reviewing aspects such as the distribution of icons, the icons used by several tools, etc., and the best way to do so has been to apply a new icon set as a proof of concept, which has allowed us to review the current status of this area in gvSIG Desktop. And although the motivation was to perform this test, the result is a new icon theme ready to be used in the application, which you can already find in the Add-ons Manager. The name of this plugin is ‘gvSIG Black‘ and the icon resolution is 24×24 pixels.

How to install it:

  • Go to the ‘Add-ons Manager’ and select the ‘Installation from URL’ option. Then search for and install the ‘gvSIG Black 24×24 Icon Theme’ plugin. Once installed, restart gvSIG.
  • From ‘Preferences’, ‘General’ section, ‘Icon set’ subsection, select ‘gvSIG Black’. Restart gvSIG and you will see that it has been applied.
    If you want to return to the ‘classic’ icon set, just repeat the process, installing the ‘Classic Icon Theme’ plugin.

In addition, we are sharing a document that will help those who want to generate their own icon set to give gvSIG Desktop their own look. The document contains the main icons used in your favourite GIS, with images of the 3 icon sets currently available, and the path where each icon is saved.

And if you want to see all the gvSIG Desktop icons, just launch the tool that generates the corresponding report from ‘Tools / Development / Show icon theme information’.

Categories: OSGeo Planet