OSGeo Planet

Fernando Quadro: QGIS plugin development: The AttributeTransfer plugin

OSGeo Planet - Thu, 2018-12-06 10:30

At this link you can access the full source code of the AttributeTransfer QGIS plugin.

The plugin itself lives in the attribute_transfer.py file. When the run() method is invoked, the Qt form appears with combo boxes pre-populated with the available vector layers that support attribute editing.

The source and target layer combos are mutually exclusive, so it is not possible to transfer an attribute within the same layer.
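
As a rough illustration of how those combos could be populated (a sketch of my own, assuming the QGIS 2.x API used throughout this series and hypothetical widget names; this is not the plugin's actual code):

from qgis.core import QgsMapLayerRegistry, QgsVectorLayer, QgsVectorDataProvider

def populate_layer_combos(dialog):
    # Only vector layers whose provider can change attribute values are offered.
    layers = QgsMapLayerRegistry.instance().mapLayers().values()
    editable = [layer for layer in layers
                if isinstance(layer, QgsVectorLayer)
                and layer.dataProvider().capabilities() & QgsVectorDataProvider.ChangeAttributeValues]

    for layer in editable:
        dialog.sourceLayerComboBox.addItem(layer.name(), layer)
        dialog.targetLayerComboBox.addItem(layer.name(), layer)

Keeping the combos mutually exclusive can then be handled by removing the currently selected source layer from the target combo whenever the source selection changes.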

While coding the plugin, you may run into small issues related mostly to the QgsSpatialIndex implementation. In the nearest neighbor analysis part, the QgsSpatialIndex.nearestNeighbor method was mentioned. However, that method only works with QgsPoint geometries; it cannot be used with QgsPolygon or QgsPolyline geometries. What can we do to handle this problem? Well… draw up a solution matrix.

Using the spatial index brings one more problem, which was noticed right after implementing the spatial comparison workflows for the different geometry types. There is a chance that the feature found as nearest via its bounding box is not actually the nearest feature. In that case, the chosen approach was to find the furthest vertex of that candidate feature and use it to build a rectangle around the target feature. If there are any source features inside that rectangle, it is very likely that one of them is the real nearest feature.
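
To make that workaround concrete, here is a minimal sketch of the idea (my own illustration using the QGIS 2.x API, assuming polyline source features and the index/feature dictionary from the earlier posts; it is not the plugin's code):

from qgis.core import QgsGeometry, QgsRectangle

def nearest_source_feature(index, source_features, target_feature):
    # index: QgsSpatialIndex built from the source layer
    # source_features: {feature id: feature} dictionary
    target_geom = target_feature.geometry()
    centroid = target_geom.centroid().asPoint()

    # 1. Candidate returned by the spatial index (bounding-box based, may not be the true nearest).
    candidate_id = index.nearestNeighbor(centroid, 1)[0]
    candidate_geom = source_features[candidate_id].geometry()

    # 2. The distance to the candidate's furthest vertex defines a search rectangle.
    radius = max(target_geom.distance(QgsGeometry.fromPoint(vertex))
                 for vertex in candidate_geom.asPolyline())
    rect = QgsRectangle(centroid.x() - radius, centroid.y() - radius,
                        centroid.x() + radius, centroid.y() + radius)

    # 3. Exact distance comparison for every source feature inside the rectangle.
    candidate_ids = index.intersects(rect)
    return min(candidate_ids,
               key=lambda fid: target_geom.distance(source_features[fid].geometry()))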

You may have thought this would be the last part of the series. But how could anyone call a coding project finished without writing tests? Stay tuned for the next post.

Categories: OSGeo Planet

Fernando Quadro: QGIS plugin development: Creating the GUI

OSGeo Planet - Wed, 2018-12-05 10:30

After playing with the QGIS Python console and implementing the nearest neighbor analysis, let's create a very simple GUI for the plugin.

Although the QGIS API docs took a few hours to understand, the PyQGIS ecosystem helped me a lot. Here is the list of tools you should incorporate into your development process as soon as possible.

1. PLUGIN BUILDER

The QGIS Plugin Builder is a plugin created to create other plugins. It gets one started in minutes so you can code instead of configuring things you do not want to configure. Note that on Linux you must put the plugin inside the QGIS plugins folder (the default is ~/.qgis2/python/plugins).

Remember to run pyrcc4 -o resources.py resources.qrc inside your plugin folder before adding it to QGIS.

2. PLUGIN RELOADER

The QGIS Plugin Reloader is a plugin (possibly created with the QGIS Plugin Builder) that lets you reload your plugin while you code. No QGIS restart is needed.

3. QT DESIGNER

Qt Designer comes with the qt4-designer package on Ubuntu. It is tailor-made for designing and building GUIs from Qt components that can be used inside QGIS. Its drag-and-drop interface lets you prototype quickly.

Thanks to the Plugin Builder, you can load the attribute_transfer_dialog_base.ui file directly into Qt Designer and adjust it to your needs.

It does not take much: just a QLineEdit and a few QComboBox widgets. They will be available in the attribute_transfer.py file as properties of the AttributeTransferDialog class.

The widget names can be customized in the right sidebar, and I advise you to do so. I chose the following:

Once loaded via Plugins -> Manage and Install Plugins -> AttributeTransfer, the plugin is available directly from the toolbar or from the Vector menu. The business logic is still missing, but we already wrote it in the previous post. All that remains is to join these two parts (a sketch of that wiring follows).
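
To give an idea of how the two parts meet, here is a rough sketch of the plugin class's run() method (the widget names and the write_values_from_nn helper are assumptions carried over from the previous posts, not the plugin's actual code):

from qgis.core import QgsMapLayerRegistry

def run(self):
    self.dlg.sourceLayerComboBox.clear()
    self.dlg.targetLayerComboBox.clear()

    for layer in QgsMapLayerRegistry.instance().mapLayers().values():
        self.dlg.sourceLayerComboBox.addItem(layer.name(), layer)
        self.dlg.targetLayerComboBox.addItem(layer.name(), layer)

    self.dlg.show()

    if self.dlg.exec_():
        source = self.dlg.sourceLayerComboBox.itemData(self.dlg.sourceLayerComboBox.currentIndex())
        target = self.dlg.targetLayerComboBox.itemData(self.dlg.targetLayerComboBox.currentIndex())
        attribute = self.dlg.attributeNameLineEdit.text()
        write_values_from_nn(source, target, attribute)  # logic from the previous post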

Categories: OSGeo Planet

Fernando Quadro: QGIS plugin development: Nearest neighbor

OSGeo Planet - Tue, 2018-12-04 10:30

The last post described the basics of vector layer manipulation. Since our goal is to create a fully functional custom plugin based on distance searching, I would like to discuss spatial indexing and nearest neighbor analysis.

The figure above illustrates a task that can be solved using the QGIS API alone. Imagine you have a source layer with an attribute filled with values. You are given a target layer as well; unfortunately, the values in this layer are missing (not so rare in the GIS world, right?). However, you know that the missing value of each feature in the target layer can be filled with the value of its nearest neighbor in the source layer. How do you do that?

1. GENERATING DUMMY DATA

Let's create two in-memory datasets with id and value attributes. Both will have ten features.

from qgis.core import QgsMapLayerRegistry, QgsVectorLayer, QgsFeature, QgsGeometry, QgsPoint, QgsSpatialIndex
from qgis.utils import iface

source_layer = QgsVectorLayer("point?crs=epsg:4326&field=id:integer&field=value:integer", "Source layer", "memory")
target_layer = QgsVectorLayer("point?crs=epsg:4326&field=id:integer&field=value:integer", "Target layer", "memory")

def create_dummy_data():
    source_layer.startEditing()
    target_layer.startEditing()

    feature = QgsFeature(source_layer.pendingFields())

    for i in range(10):
        feature.setGeometry(QgsGeometry.fromPoint(QgsPoint(i, i)))
        feature.setAttribute("id", i)
        feature.setAttribute("value", i)
        source_layer.addFeature(feature)

    feature = QgsFeature(source_layer.pendingFields())

    for i in range(10):
        feature.setGeometry(QgsGeometry.fromPoint(QgsPoint(i + i, i)))
        feature.setAttribute("id", i)
        target_layer.addFeature(feature)

    source_layer.commitChanges()
    target_layer.commitChanges()

    QgsMapLayerRegistry.instance().addMapLayer(source_layer)
    QgsMapLayerRegistry.instance().addMapLayer(target_layer)

create_dummy_data()

2. WRITING NEAREST NEIGHBOR VALUES

The actual nearest neighbor analysis can be done in ten lines of code! First, a qgis.core.QgsSpatialIndex is built from all the source_layer features. Then you iterate over the target_layer features and, for each of them, fetch just one nearest neighbor (nearestNeighbor(f.geometry().asPoint(), 1)[0]). Finally, you simply write the value of the nearest neighbor's attribute to the target layer and commit the changes.

def write_values_from_nn():
    source_layer_index = QgsSpatialIndex(source_layer.getFeatures())
    source_layer_features = {feature.id(): feature for (feature) in source_layer.getFeatures()}
    target_layer_features = target_layer.getFeatures()

    target_layer.startEditing()

    for f in target_layer_features:
        nearest = source_layer_index.nearestNeighbor(f.geometry().asPoint(), 1)[0]
        value = source_layer_features[nearest].attribute("value")
        target_layer.changeAttributeValue(f.id(), 1, value)

    target_layer.commitChanges()

write_values_from_nn()

3. WHAT COMES NEXT

We are a bit closer to our goal. What is still missing?

  • Capability checks: does the target layer support editing? Check the layer's data provider capabilities to find out (see the sketch after this list).
  • User logging: warnings and errors are completely absent. It would be great to have them presented inside qgis.gui.QgsMessageBar.
  • Custom attributes: this version expects both layers to share the same attribute with the same data type.
  • GUI: a PyQt widget will turn this console-based script into a custom plugin. That is what we will see in the next post.
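
As a pointer for the first two items, here is a minimal sketch using the same QGIS 2.x API and the target_layer/iface objects from the snippets above (illustrative only):

from qgis.core import QgsVectorDataProvider
from qgis.gui import QgsMessageBar

caps = target_layer.dataProvider().capabilities()

if not caps & QgsVectorDataProvider.ChangeAttributeValues:
    # Report the problem in the QGIS message bar instead of failing silently.
    iface.messageBar().pushMessage("AttributeTransfer",
                                   "The target layer does not support attribute edits.",
                                   QgsMessageBar.WARNING)
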
Categories: OSGeo Planet

GIS for Thought: Copying Rasters in PostGIS

OSGeo Planet - Tue, 2018-12-04 09:00

I ran into a process where I wanted to create copies of rasters in PostgreSQL. While seemingly a simple process, this took me a bit of work to figure out.

For my workflow I had three rasters, all the same size, which I wanted to load into the same PostGIS table as three raster geometry columns. I don't think this will work for differently sized rasters, since the rids will not match.

Three rasters:
raster1
raster2
raster3

Which I want to copy into:
merged_raster

First to create the merged raster table:

CREATE TABLE merged_raster (
    rid serial NOT NULL,
    raster1 raster,
    raster2 raster,
    raster3 raster
);

Then to add the rid’s. These are the id’s of the tiles that the raster was split into when loading. If your tile size is large enough then you may only have one.

INSERT INTO merged_raster(rid) (SELECT rid FROM raster1);

Then copying the actual data is straightforward (this assumes the raster column in the raster1, raster2 and raster3 tables is called rast):

UPDATE merged_raster m SET raster1 = r.rast FROM raster1 r WHERE r.rid = m.rid;
UPDATE merged_raster m SET raster2 = r.rast FROM raster2 r WHERE r.rid = m.rid;
UPDATE merged_raster m SET raster3 = r.rast FROM raster3 r WHERE r.rid = m.rid;

Now I still have an issue that QGIS will not load these layers. It will always load the initial raster column no matter what is chosen.

Categories: OSGeo Planet

QGIS Blog: User question of the Month – Dec 18 & answers from Nov

OSGeo Planet - Mon, 2018-12-03 20:59

It’s December and that means it is time to plan for the next year. Planning also means preparing a budget and to do so, we would like to learn more about what you think QGIS.ORG should focus on: features or bug fixing and polishing? Therefore, we invite you to our QGIS user question of December 2018.

Your answers in November

In November, we wanted to know which version of QGIS you use.

Categories: OSGeo Planet

QGIS Blog: Call for presentations: QGIS User Conference and Developer Meeting, 2019

OSGeo Planet - Mon, 2018-12-03 18:07
The next International QGIS User Conference and Developer Meeting will take place in the week from 4 to 10 March 2019 in A Coruña (Spain).

The call for presentations and workshop registration is out and you can apply using the online registration form. Note that the deadline for presenting proposals is 21 December 2018. If you need any info, please email your queries to: userconf2019@qgis.es

The International QGIS User and Developer Conference wants to be the reference conference for the QGIS community, a meeting point for the family of users and developers associated with the QGIS project. Attending the conference is an opportunity to gather experience and share knowledge about QGIS. The language of the conference is English.

The event is organized by the Spanish QGIS Association [1], the Spanish user group, and the Galician Xeoinquedos community [2] with the help of A Coruña municipality [3]. The event is under the QGIS.org umbrella. We look forward to seeing you there!

[1] http://qgis.es/
[2] https://xeoinquedos.wordpress.com/
[3] https://www.coruna.gal/

Categories: OSGeo Planet

Fernando Quadro: QGIS plugin development: Using the console

OSGeo Planet - Mon, 2018-12-03 10:30

As mentioned in the previous part of the series, the Python console is an entry point to GIS workflow automation within QGIS. Remember that there is an iface object, representing the qgis.gui.QgisInterface instance, available inside the console that gives you access to the whole QGIS GUI. Let's see what we can do inside the console.

1. LOADING A FOLDER OF VECTOR LAYERS

import glob

from qgis.core import QgsMapLayerRegistry, QgsVectorLayer

def load_folder(folder):
    VALID_EXTENSIONS = ('.geojson', '.gpkg', '.shp')
    files = [f for f in glob.glob("{}/*".format(folder)) if f.endswith(VALID_EXTENSIONS)]

    for f in files:
        layer = QgsVectorLayer(f, f.split('/')[-1], 'ogr')

        if not layer.isValid():
            iface.messageBar().pushCritical("Failed to load:", f)
            continue

        QgsMapLayerRegistry.instance().addMapLayer(layer)

load_folder("path/to/your/vector/files/folder")
  • QgsMapLayerRegistry represents the layers panel present in the QGIS GUI
  • iface.messageBar() returns the main application message bar and lets you notify the user about what is going on
  • QgsVectorLayer represents a vector layer with its underlying vector datasets

2. EDITING THE ACTIVE LAYER'S ATTRIBUTE TABLE

The following code demonstrates how the vector layer's attribute table can be edited through the console.

  • Any attribute to be written has to come in the form of a qgis.core.QgsField – this is more or less an encapsulation of an attribute name and its type (PyQt4.QtCore.QVariant)
  • The data provider must be capable of adding attributes (caps & QgsVectorDataProvider.AddAttributes)
  • QgsVectorLayer.addAttribute – a method that returns a boolean instead of raising an exception
from qgis.core import QgsField, QgsVectorDataProvider
from qgis.gui import QgsMessageBar
from PyQt4.QtCore import QVariant

def edit_active_layer(attr_name, attr_type):
    layer = iface.activeLayer()
    caps = layer.dataProvider().capabilities()

    if caps & QgsVectorDataProvider.AddAttributes:
        layer.startEditing()

        if layer.addAttribute(QgsField(attr_name, attr_type)):
            iface.messageBar().pushMessage("Attribute {0} was successfully added to the active layer.".format(attr_name), QgsMessageBar.SUCCESS)
            layer.commitChanges()
        else:
            iface.messageBar().pushMessage("Attribute {0} was not added. Does it already exist?".format(attr_name), QgsMessageBar.CRITICAL)
            layer.rollBack()

edit_active_layer("new_string_attribute", QVariant.String)

3. CREATING A NEW VECTOR LAYER

It is possible to create a completely new vector layer with the Python console. Below is a very simple function (create_new_layer), but I hope you can imagine the ways it could be tweaked.

from qgis.core import (QGis, QgsCoordinateReferenceSystem, QgsFeature, QgsField, QgsFields,
                       QgsGeometry, QgsMapLayerRegistry, QgsPoint, QgsVectorDataProvider,
                       QgsVectorFileWriter, QgsVectorLayer)
from qgis.gui import QgsMessageBar
from PyQt4.QtCore import QVariant

def create_new_layer():
    filename = "/path/to/your/vector/file.gpkg"

    fields = QgsFields()
    fields.append(QgsField("attr1", QVariant.String))
    fields.append(QgsField("attr2", QVariant.Int))

    file = QgsVectorFileWriter(
        filename,
        "UTF8",
        fields,
        QGis.WKBPoint,
        QgsCoordinateReferenceSystem(4326),
        "GPKG"
    )

    layer = QgsVectorLayer(filename, filename.split("/")[-1], "ogr")
    QgsMapLayerRegistry.instance().addMapLayer(layer)

    if not layer.dataProvider().capabilities() & QgsVectorDataProvider.AddAttributes:
        pass

    feature = QgsFeature(layer.pendingFields())
    feature.setGeometry(QgsGeometry().fromPoint(QgsPoint(0, 0)))
    feature.setAttribute("attr1", "attr1")
    feature.setAttribute("attr2", 2)

    layer.startEditing()

    if layer.addFeature(feature, True):
        layer.commitChanges()
    else:
        layer.rollBack()
        iface.messageBar().pushMessage("Feature addition failed.", QgsMessageBar.CRITICAL)

create_new_layer()

These were just a few examples of what can be done with the QGIS API and the Python console. In the next post we will focus on spatial joins within QGIS.

Categories: OSGeo Planet

Free and Open Source GIS Ramblings: Movement data in GIS #17: Spatial analysis of GeoPandas trajectories

OSGeo Planet - Sun, 2018-12-02 16:08

In Movement data in GIS #16, I presented a new way to deal with trajectory data using GeoPandas and how to load the trajectory GeoDataframes as a QGIS layer. Following up on this initial experiment, I’ve now implemented a first version of an algorithm that performs a spatial analysis on my GeoPandas trajectories.

The first spatial analysis algorithm I’ve implemented is Clip trajectories by extent. Implementing this algorithm revealed a couple of pitfalls:

  • To achieve correct results, we need to compute spatial intersections between linear trajectory segments and the extent. Therefore, we need to convert our point GeoDataframe to a line GeoDataframe (see the sketch after this list).
  • Based on the spatial intersection, we need to take care of computing the corresponding timestamps of the events when trajectories enter or leave the extent.
  • A trajectory can intersect the extent multiple times. Therefore, we cannot simply use the global minimum and maximum timestamp of intersecting segments.
  • GeoPandas provides spatial intersection functionality but if the trajectory contains consecutive rows without location change, these will result in zero length lines and those cause an empty intersection result.

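To illustrate the first and last of these pitfalls, here is a sketch of my own (not the plugin's code), assuming a point GeoDataFrame indexed by timestamp as in the test further below:

from geopandas import GeoDataFrame
from shapely.geometry import LineString

def point_gdf_to_segments(geo_df):
    # One LineString row per consecutive point pair, keeping the start/end timestamps
    # and skipping consecutive rows without a location change (zero-length segments).
    rows = []
    prev_t, prev_geom = None, None
    for t, row in geo_df.iterrows():
        if prev_geom is not None and not prev_geom.equals(row['geometry']):
            rows.append({'t_start': prev_t, 't_end': t,
                         'geometry': LineString([prev_geom, row['geometry']])})
        prev_t, prev_geom = t, row['geometry']
    return GeoDataFrame(rows, crs=geo_df.crs)
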
So far, the clip result only contains the trajectory id plus a suffix indicating the sequence of the intersection segments for a specific trajectory (because one trajectory can intersect the extent multiple times). The following screenshot shows one highlighted trajectory that intersects the extent three times and the resulting clipped trajectories:

This algorithm together with the basic trajectory from points algorithm is now available in a Processing algorithm provider plugin called Processing Trajectory.

Note: This plugin depends on GeoPandas.

Note for Windows users: GeoPandas is not a standard package that is available in OSGeo4W, so you’ll have to install it manually. (For the necessary steps, see this answer on gis.stackexchange.com)

The implemented tests show how to use the Trajectory class independently of QGIS. So far, I’m only testing the spatial properties though:

def test_two_intersections_with_same_polygon(self):
    polygon = Polygon([(5,-5),(7,-5),(7,12),(5,12),(5,-5)])
    data = [{'id':1, 'geometry':Point(0,0), 't':datetime(2018,1,1,12,0,0)},
            {'id':1, 'geometry':Point(6,0), 't':datetime(2018,1,1,12,10,0)},
            {'id':1, 'geometry':Point(10,0), 't':datetime(2018,1,1,12,15,0)},
            {'id':1, 'geometry':Point(10,10), 't':datetime(2018,1,1,12,30,0)},
            {'id':1, 'geometry':Point(0,10), 't':datetime(2018,1,1,13,0,0)}]
    df = pd.DataFrame(data).set_index('t')
    geo_df = GeoDataFrame(df, crs={'init': '31256'})
    traj = Trajectory(1, geo_df)
    intersections = traj.intersection(polygon)
    result = []
    for x in intersections:
        result.append(x.to_linestring())
    expected_result = [LineString([(5,0),(6,0),(7,0)]), LineString([(7,10),(5,10)])]
    self.assertEqual(result, expected_result)

One issue with implementing the algorithms as QGIS Processing tools in this way is that the tools are independent of one another. That means that each tool has to repeat the expensive step of creating the trajectory objects in memory. I’m not sure this can be solved.

Categories: OSGeo Planet

Cameron Shorter: OSGeoLive in 4 minutes

OSGeo Planet - Fri, 2018-11-30 21:16

Here is our OSGeoLive lightning talk, stripped back to 4 minutes, and presented at the FOSS4G-Oceania conference. OSGeoLive talk starts at 8mins 15secs.
The shortened slide deck and notes are here.
Categories: OSGeo Planet

GIScussions: Sh*thole Geography Standup Quiz

OSGeo Planet - Fri, 2018-11-30 17:05

Shithole President - DSC07529_ep

Here is a little bit of fun for you to enjoy at a staff social, after your Xmas lunch or in the pub one evening

This is how the Sh*thole Geography Standup Quiz works.

“At the beginning everyone stands up. 

Quizmaster asks a question with 3 suggested answers.

If you think the answer is the first one put your left hand up, if you think it is the second one both hands up, if you think it is the third one put your right hand up. 

QM tells you the correct answer, everyone who got it wrong sit down. If you didn’t make a choice then you also sit down. There are some other interesting stats in the speaker notes for each slide (you need to download the presentation)

Then we go again until one person is left standing – if everyone gets knocked out you can all stand up again and keep answering questions.

There is a tie breaker question at the end”

And you could ignore all my instructions and just have a fun time after Xmas lunch, or even do the quiz on your own and see how many questions you can answer; you could tweet your score with the hashtag #ShitholeQuiz

What’s the point of the maps at the end of each slide/question? I’m not sure! The quiz was first presented at FOSS4G with Ken and we had a cartographic spin to the whole talk; I changed it to become a bit more philosophical – maybe the maps disprove some of our preconceptions.



Don’t press the autoplay button on the quiz or it will go into auto advance mode, which is not the ideal way to run the quiz. Use the > button to advance and the < button to go backwards if needed. If you want to download the quiz or access it directly, you can find the quiz here

Categories: OSGeo Planet

Even Rouault: SRS barn raising: 6th report

OSGeo Planet - Fri, 2018-11-30 14:05
This is the sixth progress report of the GDAL SRS barn effort. The pace of changes has not yet slowed down, with a significant part of the work still being in PROJ, and an initial integration in GDAL.
The major news item is that RFC2, implementing the new capabilities (WKT-2 support, late-binding approach, SQLite database), has now been merged into PROJ master.
An initial integration of PROJ master into GDAL has been started in a custom GDAL branch. This includes:
  • PROJ master, which will be released as 6.0, is now a required dependency of GDAL. It actually becomes the only required external third-party dependency (if we except the copies of a few libraries, such as libtiff, libgeotiff, etc. that have been traditionally included into the GDAL source tree)
  • The dozen or so continuous integration configurations have been modified to build PROJ master as a preliminary step.
  • Related to the above, we have included in PROJ a way to "version" its symbols. If PROJ is built with -DPROJ_RENAME_SYMBOLS in CFLAGS and CXXFLAGS, all its exported symbols are prefixed with "internal_". This enables GDAL to link against PROJ master, while still using pre-compiled dependencies (such as libspatialite) that link against the system PROJ version, without a risk of symbol clash. This is particularly useful to be able to run GDAL autotests on continuous integration environments that use pre-packaged dependencies (or if you want to test the new GDAL without rebuilding all reverse dependencies of GDAL). This however remains a hack, and ultimately when PROJ 6 has been released, all reverse dependencies should be built against it. (this solution has been successfully tested in the past where GDAL had a libtiff 4.0 internal copy, whereas external libtiff used by some GDAL dependencies relied on the system libtiff 3.X)
  • Compatibility mechanisms which were required to support older PROJ versions have been removed. In particular, the runtime loading (using dlopen() / LoadLibrary() mechanism) has been removed. It proved to cause code complication, and users frequently ran into headaches with different PROJ versions being loaded and clashing/crashing at runtime.
  • The OGRSpatialReference class which implements CRS manipulation in GDAL has been modified to use the new PROJ functions to import and export between WKT and PROJ strings. Previously GDAL had such code, which is now redundant with what PROJ offers. This preliminary integration caused a number of fixes to be made on PROJ to have compatibility with the input and output of GDAL for WKT.1 and PROJ strings. Besides "moving" code from GDAL to PROJ, a practical consequence is that the addition of a new projection method into PROJ will no longer require changes to be made to GDAL for it to be usable for reprojection purposes.

There have been reflections on how to use the new code developed in PROJ by the existing PROJ code. A pull request is currently under review and implements:
  • changes needed to remove from the data/ directory the now obsolete EPSG, IGNF, esri and esri.extra files and rely instead on the proj.db database
  • making the proj_create_crs_to_crs() API use the new late-binding approach to create transformation pipelines
  • updating cs2cs to use that new API. 
  • list and address backward compatibility issues related to honouring official axis order
The integration phase in GDAL continues with the aim of using more of the new PROJ code. Typically, the OGRSpatialReference class that models the CRS/SRS in GDAL was up to now mostly a hierarchy of WKT nodes, where setter methods of OGRSpatialReference would directly create/modify/delete nodes, and getter methods query them. This approach was fine when you had to manage just one WKT version (with the caveat that it was also easy to produce invalid WKT representations, lacking mandatory nodes). However, this is no longer appropriate now that we want to support multiple WKT versions. Our goal is to make OGRSpatialReference act rather on osgeo::proj::CRS objects (and its derived classes). Switching between the two abstractions is a non-trivial task, and doing it in a big-bang approach seemed risky, so we are progressively doing it by using a dual internal modelling. A OGRSpatialReference instance will maintain as a primary source a osgeo::proj::CRS object, and for operations not yet converted to the new approach, will fall back to translating it internally to WKT.1 to allow direct manipulation of the nodes, and then retranslate that updated WKT.1 representation back to a osgeo::proj::CRS object. Ultimately the proportion of methods using the fallback way should decrease (it is not completely clear we can remove all of them since direct node manipulation is spread in a significant number of GDAL drivers). The task is slowly progressing, because each change can subtly modify the final WKT.1 representation (nodes being added, number of significant digits changing) and cause a number of unit tests to break (the GDAL autotest suite is made of 280,000 lines of Python code) that then have to be analyzed to see whether there was a bug or just an expected result that was slightly altered.

Because of all the above impacts, we have decided to do an early release in December of GDAL master as GDAL 2.4.0, with all the new features since GDAL 2.3, in order to be able to land this PROJ integration afterwards. A GDAL 2.5.0 release will hopefully follow around May 2019 with the result of the gdalbarn work.

Other side activities regarding collecting transformation grids:
  • Following a clarification from IGN France on the open licensing of their geodesy-related resources, their CRS and transformation XML registry is now processed to populate the IGNF objects in the proj.db database (the previous import used the already processed IGNF file containing PROJ strings, which caused information losses). The associated vertical shift grids have also been converted from their text-based format to the PROJ-digestible .gtx format, integrated in the proj-datumgrid-europe package, and they have been referenced in the database for transformations that use them.
  • The NGS GEOID 2012B vertical grids to convert between NAD83 ellipsoidal heights and NAVD88 heights have also been integrated in the proj-datumgrid-north-america package
Categories: OSGeo Planet

Free and Open Source GIS Ramblings: TimeManager 3.0.2 released!

OSGeo Planet - Thu, 2018-11-29 17:34

Bugfix release 3.0.2 fixes an issue where “accumulate features” was broken for timestamps with milliseconds.

If you like TimeManager, know your way around setting up Travis for testing QGIS plugins, and want to help improve TimeManager stability, please get in touch!

Categories: OSGeo Planet

Fernando Quadro: QGIS plugin development: First steps

OSGeo Planet - Thu, 2018-11-29 11:33

QGIS is a brilliant tool for Python-based automation, whether as custom scripts or even plugins. The first steps towards writing custom code can be a bit hard, since you need to grasp the Python API, which is somewhat complex. The QGIS Plugin Development series starting today aims to develop a fully functional custom plugin capable of writing attribute values from a source layer to a target layer based on their spatial proximity.

In this part, I will cover the basics that are good to know before you start.

1. Documentation

Different versions of QGIS come with different Python APIs. The documentation can be found at https://qgis.org, the most recent being version 3.2. Note that if you go directly to http://qgis.org/api/, you will see the current docs.

Alternatively, you can run “apt install qgis-api-doc” on your Ubuntu system and run “python -m SimpleHTTPServer [port]” inside “/usr/share/qgis/doc/api”. You will find the documentation at “http://localhost:8000” (if you do not provide a port number), and it will be available even when you are offline.

2. Basic API structure

Below is a brief overview of what is available within the API:

  • The qgis.core package brings all the basic objects, such as QgsMapLayer, QgsDataSourceURI, QgsFeature, etc.
  • The qgis.gui package brings GUI elements that can be used within QGIS, such as QgsMessageBar or QgsInterface
  • The qgis.analysis, qgis.networkanalysis, qgis.server and qgis.testing packages, which will not be covered in this series
  • The qgis.utils module, which comes with iface (very useful in the QGIS Python console)

3. The QGIS Python console

Using the Python console is the easiest way to automate your QGIS workflow. It can be accessed by pressing Ctrl + Alt + P or by navigating through the menu to Plugins -> Python Console. As mentioned above, the iface object from qgis.utils is exposed by default inside the console, letting you interact with the QGIS GUI. Try the following examples as an initial test:

iface.mapCanvas().scale() # returns the current map scale
iface.mapCanvas().zoomScale(100) # zoom to a 1:100 scale
iface.activeLayer().name() # get the name of the active layer
iface.activeLayer().startEditing() # toggle editing

That was a brief introduction to the QGIS API; in the next post we will dig deeper into the QGIS console.

Categories: OSGeo Planet

Fernando Quadro: JTS change in GeoServer

OSGeo Planet - Wed, 2018-11-28 14:05

As of GeoServer 2.14, the output produced by the REST API for layer requests uses a different package name (org.locationtech instead of com.vividsolutions) for the geometry types, due to the upgrade to JTS (Java Topology Suite) 1.16.0. For example:

Before:

...
<attribute>
  <name>geom</name>
  <minOccurs>0</minOccurs>
  <maxOccurs>1</maxOccurs>
  <nillable>true</nillable>
  <binding>com.vividsolutions.jts.geom.Point</binding>
</attribute>
...

After:

...
<attribute>
  <name>geom</name>
  <minOccurs>0</minOccurs>
  <maxOccurs>1</maxOccurs>
  <nillable>true</nillable>
  <binding>org.locationtech.jts.geom.Point</binding>
</attribute>
...

Any REST client that relies on this binding information must be updated to support the new names.
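
For what it's worth, a client that stored the old binding names could normalize them when parsing the new output. A minimal illustrative sketch (my own, not from the GeoServer documentation):

OLD_PREFIX = "com.vividsolutions.jts.geom"
NEW_PREFIX = "org.locationtech.jts.geom"

def normalize_binding(binding):
    # Map pre-2.14 JTS binding names onto the new package name.
    if binding.startswith(OLD_PREFIX):
        return NEW_PREFIX + binding[len(OLD_PREFIX):]
    return binding

print(normalize_binding("com.vividsolutions.jts.geom.Point"))  # org.locationtech.jts.geom.Point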

Source: GeoServer Documentation

Categories: OSGeo Planet

Paul Ramsey: Esri and Winning

OSGeo Planet - Mon, 2018-11-26 13:00

How much winning is enough? Have you been winning so much that you’re tired of winning now?

I ask because last month I gave a talk (PDF Download) to the regional GIS association in Manitoba, about open source and open data. My talk included a few points that warned about the downsides of being beholden to a single software vendor, and it included some information about the software available in the open source geospatial ecosystem.

MGUG Keynote 2018

In a testament to the full-spectrum dominance of Esri, the audience was almost entirely made up of Esri customers. The event was sponsored by Esri. Esri had a table at the back of the room. Before giving my talk, I made a little joke about the fact that my talk would in fact include some digs at Esri (though I left the most pointed ones on the cutting room floor).

People seemed to like the talk. They laughed at my jokes. They nodded in the right places.

Among the points I made:

  • Single-vendor dominance in our field is narrowing the understanding of “what is possible” amongst practitioners in that ecosystem.
  • Maintaining a single-vendor policy dramatically reduces negotiating power with that vendor.
  • Maintaining a single-vendor policy progressively de-skills your staff, as they become dependent on a single set of tooling.
  • Practitioners have higher market value when they learn more than just the tools of one vendor, so self-interest dictates learning tools outside the single-vendor ecosystem.
  • Point’n’click GIS tools from Esri have widened access to GIS, which is a good thing, but driven down the market value of practitioners who limit themselves to those tools.

None of these points is unique to Esri – they are true of any situation where a single tool has driven competitors off the field, whether it be Adobe graphics tools or Autodesk CAD tools or Microsoft office automation tools.

Nor are any of these points indicative of any sort of ill will or malign intent on the part of Esri – they are just the systemic effects of market dominance. It is not contingent on Esri to change their behaviour or limit their success; it’s contingent on practitioners and managers to recognize the negative aspects of the situation and react accordingly.

And yet.

Esri and Winning

Despite the fact that almost all the people in the room were already their customers, that no new business would be endangered by my message, that all the students would still be taught their tools, that all the employers would still include them in job requirements, that people would continue to use the very words they choose to describe basic functions of our profession …

Despite all that, the Esri representative still went to the president of the association, complained to her about the content of my talk, and asked her to ensure that nothing I would say in my afternoon technical talk would be objectionable to himself. (In the event, I made some nasty jokes about Oracle, nobody complained.)

For some of these people, no amount of winning is enough, no position of dominance is safe, no amount of market leverage is sufficient.

It’s sad and it’s dangerous.

I was reminded of this last week, meeting an old friend in Australia and learning that he’d been blackballed out of a job for recommending software that wasn’t Esri software. Esri took away his livelihood for insufficient fealty.

This is the danger of dominance.

When the local Esri rep has a better relationship with your boss than you do, do you advocate for using alternative tools? You could be limiting or even potentially jeopardizing your career.

When Esri has locked up the local geospatial software market, do you bid an RFP with an alternative open tool set? You could lose your Esri partnership agreement and with it your ability to bid any other local contracts. Esri will make your situation clear to you.

This is the danger of dominance.

A market with only one vendor is not a market. There’s a name for it, and there’s laws against it. And yet, our profession glories in it. We celebrate “GIS day”, a marketing creation of our dominant vendor. Our publicly funded colleges and universities teach whole curricula using only Esri tools.

And we, as a profession, do not protest. We smile and nod. We accept our “free” or “discounted” trainings from Esri (comes with our site license!) and our “free” or “discounted” tickets to the Esri user conference. If we are particularly oblivious, we wonder why those open source folks never come around to market their tools to us.

We have met the enemy, and he is us.

Categories: OSGeo Planet

Nyall Dawson: Thoughts on “FOSS4G/SOTM Oceania 2018”, and the PyQGIS API improvements which it caused

OSGeo Planet - Sun, 2018-11-25 00:05

Last week the first official “FOSS4G/SOTM Oceania” conference was held at Melbourne University. This was a fantastic event, and there’s simply no way I can extend sufficient thanks to all the organisers and volunteers who put this event together. They did a brilliant job, and their efforts are even more impressive considering it was the inaugural event!

Upfront — this is not a recap of the conference (I’m sure someone else is working on a much more detailed write up of the event!), just some musings I’ve had following my experiences assisting Nathan Woodrow deliver an introductory Python for QGIS workshop he put together for the conference. In short, we both found that delivering this workshop to a group of PyQGIS newcomers was a great way for us to identify “pain points” in the PyQGIS API and areas where we need to improve. The good news is that as a direct result of the experiences during this workshop the API has been improved and streamlined! Let’s explore how:

Part of Nathan’s workshop (notes are available here) focused on a hands-on example of creating a custom QGIS “Processing” script. I’ve found that preparing workshops is guaranteed to expose a bunch of rare and tricky software bugs, and this was no exception! Unfortunately the workshop was scheduled just before the QGIS 3.4.2 patch release which fixed these bugs, but at least they’re fixed now and we can move on…

The bulk of Nathan’s example algorithm is contained within the following block (where “distance” is the length of line segments we want to chop our features up into):

for current, input_feature in enumerate(features):
    geom = input_feature.geometry().constGet()
    if isinstance(geom, QgsLineString):
        continue
    first_part = geom.geometryN(0)
    start = 0
    end = distance
    length = first_part.length()
    while start < length:
        new_geom = first_part.curveSubstring(start, end)
        output_feature = input_feature
        output_feature.setGeometry(QgsGeometry(new_geom))
        sink.addFeature(output_feature)
        start += distance
        end += distance

There’s a lot here, but really the guts of this algorithm breaks down to one line:

new_geom = first_part.curveSubstring(start,end)

Basically, a new geometry is created for each trimmed section in the output layer by calling the “curveSubstring” method on the input geometry and passing it a start and end distance along the input line. This returns the portion of that input LineString (or CircularString, or CompoundCurve) between those distances. The PyQGIS API nicely hides the details here – you can safely call this one method and be confident that regardless of the input geometry type the result will be correct.

Unfortunately, while calling the “curveSubstring” method is elegant, all the code surrounding this call is not so elegant. As a (mostly) full-time QGIS developer myself, I tend to look over oddities in the API. It’s easy to justify ugly API as just “how it’s always been”, and over time it’s natural to develop a type of blind spot to these issues.

Let’s start with the first ugly part of this code:

geom = input_feature.geometry().constGet()
if isinstance(geom, QgsLineString):
    continue

first_part = geom.geometryN(0)
# chop first_part into sections of desired length
...

This is rather… confusing… logic to follow. Here the script is fetching the geometry of the input feature, checking if it’s a LineString, and if it IS, then it skips that feature and continues to the next. Wait… what? It’s skipping features with LineString geometries?

Well, yes. The algorithm was written specifically for one workshop, which was using a MultiLineString layer as the demo layer. The script takes a huge shortcut here and says “if the input feature isn’t a MultiLineString, ignore it — we only know how to deal with multi-part geometries”. Immediately following this logic there’s a call to geometryN( 0 ), which returns just the first part of the MultiLineString geometry.

There’s two issues here — one is that the script just plain won’t work for LineString inputs, and the second is that it ignores everything BUT the first part in the geometry. While it would be possible to fix the script and add a check for the input geometry type, put in logic to loop over all the parts of a multi-part input, etc, that’s instantly going to add a LOT of complexity or duplicate code here.

Fortunately, this was the perfect excuse to improve the PyQGIS API itself so that this kind of operation is simpler in future! Nathan and I had a debrief/brainstorm after the workshop, and as a result a new “parts iterator” has been implemented and merged to QGIS master. It’ll be available from version 3.6 on. Using the new iterator, we can simplify the script:

geom = input_feature.geometry()
for part in geom.parts():
    # chop part into sections of desired length
    ...

Win! This is simultaneously more readable, more Pythonic, and automatically works for both LineString and MultiLineString inputs (and in the case of MultiLineStrings, we now correctly handle all parts).

Here’s another pain-point. Looking at the block:

new_geom = part.curveSubstring(start, end)
output_feature = input_feature
output_feature.setGeometry(QgsGeometry(new_geom))

At first glance this looks reasonable – we use curveSubstring to get the portion of the curve, then make a copy of the input_feature as output_feature (this ensures that the features output by the algorithm maintain all the attributes from the input features), and finally set the geometry of the output_feature to be the newly calculated curve portion. The ugliness here comes in this line:

output_feature.setGeometry(QgsGeometry(new_geom))

What’s that extra QgsGeometry(…) call doing here? Without getting too sidetracked into the QGIS geometry API internals, QgsFeature.setGeometry requires a QgsGeometry argument, not the QgsAbstractGeometry subclass which is returned by curveSubstring.

This is a prime example of a “paper-cut” style issue in the PyQGIS API. Experienced developers know and understand the reasons behind this, but for newcomers to PyQGIS, it’s an obscure complexity. Fortunately the solution here was simple — and after the workshop Nathan and I added a new overload to QgsFeature.setGeometry which accepts a QgsAbstractGeometry argument. So in QGIS 3.6 this line can be simplified to:

output_feature.setGeometry(new_geom)

Or, if you wanted to make things more concise, you could put the curveSubstring call directly in here:

output_feature = input_feature
output_feature.setGeometry(part.curveSubstring(start, end))

Let’s have a look at the simplified script for QGIS 3.6:

for current, input_feature in enumerate(features):
    geom = input_feature.geometry()
    for part in geom.parts():
        start = 0
        end = distance
        length = part.length()
        while start < length:
            output_feature = input_feature
            output_feature.setGeometry(part.curveSubstring(start, end))
            sink.addFeature(output_feature)
            start += distance
            end += distance

This is MUCH nicer, and will be much easier to explain in the next workshop! The good news is that Nathan has more niceness on the way which will further improve the process of writing QGIS Processing script algorithms. You can see some early prototypes of this work here:

I couldn’t be at the community day but managed to knock out some of the new API on the plane on the way home. **API subject to change

Categories: OSGeo Planet

PostGIS Development: PostGIS 2.3.8, 2.4.6

OSGeo Planet - Sat, 2018-11-24 00:00

The PostGIS development team is pleased to provide bug fix 2.3.8 and 2.4.6 for the 2.3 and 2.4 stable branches.

Continue Reading by clicking title hyperlink ..
Categories: OSGeo Planet

Fernando Quadro: GeoServer style editor now has a "Full screen" mode

OSGeo Planet - Fri, 2018-11-23 10:30

GeoServer 2.14.1 came with something new: the style editor page now has a "Full screen" button in the top right corner of the window:

When pressed, the editor and the preview are displayed side by side and take up the entire browser window:

The GeoServer style section has improved with every release. The development team has been giving special attention to this area for some time now, and it is already much easier to edit and preview styles in GeoServer.

Categories: OSGeo Planet

XYCarto: Wellington Elevations: Interpolating the Bathymetry

OSGeo Planet - Fri, 2018-11-23 04:39

It is important to note something from the very beginning. The interpolated bathymetry developed in this project does not reflect the actual bathymetry of the Wellington Harbour. It is my best guess based on the tools I had and the data I worked with. Furthermore, this interpolation is NOT the official product of any institution in New Zealand. It is an interpolation created by me only for the purposes of visualization.

welly_harbour-colour-and-aerial_FULLVIEW

Part of the goal when visualizing the Wellington landscape was to incorporate a better idea about what may be happening below the surface of the harbor. Various bathymetric scans in the past have gathered much of the information and institutions like NIWA have done the work visualizing that data. As for myself, I did not have access to those bathymetries; however, I did have a sounding point data set to work with, so I set about interpolating those points.

The data set, in CSV format, was over a million points; too dense for a single interpolation. I worked out a basic plan for the interpolation based on splitting the points into a grid, interpolate the smaller bits, then reassemble the grid tiles into a uniform bathymetry.

Conversion from CSV to shp
Using the open option (-oo) switch, OGR will convert CSV to shp seamlessly

ogr2ogr -s_srs EPSG:4167 -t_srs EPSG:4167 -oo X_POSSIBLE_NAMES=$xname* -oo Y_POSSIBLE_NAMES=$yname* -f "ESRI Shapefile" $outputshapepath/$basenme.shp $i

View the full script here: https://github.com/IReese/wellyvation/blob/master/utils/convertcsv.sh

Gridding the Shapefile
With the shapefile in place, I next needed to break it into smaller pieces for interpolation. For now, I create the grid by hand in QGIS using the ‘Create Grid’ function. This is found under Vector > Research Tools > Create Grid. Determining a grid size that works best for the interpolation is a bit of trial and error. You want the largest size your interpolation can manage without crashing. Using the grid tool from QGIS is very convenient, in that it creates an attribute table with the xmin, xmax, ymin, ymax coordinates for each tile in the grid. These attributes become very helpful during the interpolation process.
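
For reference, the same grid can presumably also be generated from the QGIS Python console through the Processing framework; this is only a sketch, assuming the QGIS 3 native:creategrid algorithm, with placeholder extent, spacing and output path:

import processing

params = {
    'TYPE': 2,  # rectangle (polygon)
    'EXTENT': '174.7,175.0,-41.4,-41.1 [EPSG:4167]',  # placeholder extent
    'HSPACING': 0.05,  # tile width, found by trial and error
    'VSPACING': 0.05,  # tile height
    'CRS': 'EPSG:4167',
    'OUTPUT': '/tmp/interpolation_grid.gpkg'
}
processing.run('native:creategrid', params)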

Interpolating the Points
I switched things up in the interpolation methods this time and tried out SAGA GIS. I have been looking for a while now for a fast and efficient method of interpolation that I could easily build into a scripted process. SAGA seemed like a good tool for this. The only drawback: I had a very hard time finding examples online about how to use this tool. My workaround was to test the tool in QGIS first. I noticed that when the command ran, QGIS saved the last command in a log file. I found that log, copied out the command line call, and began to build my SAGA command for my script from there.

Here is a look at the command I used:

saga_cmd grid_spline "Multilevel B-Spline Interpolation" -TARGET_DEFINITION 0 -SHAPES "$inputpoints" -FIELD "depth" -METHOD 0 -EPSILON 0.0001 -TARGET_USER_XMIN $xmin -TARGET_USER_XMAX $xmax -TARGET_USER_YMIN $ymin -TARGET_USER_YMAX $ymax -TARGET_USER_SIZE $reso -TARGET_USER_FITS 0 -TARGET_OUT_GRID "$rasteroutput/sdat/spline_${i}"

The full script can be found here: https://github.com/IReese/wellyvation/blob/master/utils/welly_interpolation_process.sh

I tested a number of methods and landed on ‘grid_spline’ as producing the best results for the project. It was useful because it did a smooth interpolation across the large ‘nodata’ spaces.

Once the initial interpolation was complete, I needed to convert the output to GeoTIFF since SAGA exports in an .sdat format. Easy enough since GDAL_TRANSLATE recognizes the .sdat format. I then did my standard prepping and formatting for visualization:

gdal_translate "$iupput_sdat/IDW_${i}.sdat" "$output_tif/IDW_${i}.tif"
gdaldem hillshade -multidirectional -compute_edges "$output_tif/IDW_${i}.tif" "$ouput_hs/IDW_${i}.tif"
gdaladdo -ro "$output_tif/IDW_${i}.tif" 2 4 8 16 32 64 128
gdaladdo -ro "$ouput_hs/IDW_${i}.tif" 2 4 8 16 32 64 128

Here is a look at the interpolated harbour bathymetry, hillshaded, with the Wellington 1m DEM hillshade added over top
welly_harbour_bw_all

And here is a look at the same bathy hillshade with coloring
welly_harbour_bw-and-aerial

Visualizing the Bathymetry
With the bathymetry complete, it was simply a matter of building it into the existing visualization I built for the Wellington Region. Learn more about the project here. The visualization took four steps:

Hillshade
addedbathy_bathyonlypng
Color
addedbathy_bathyonly_withcolor
Aerial Imagery
addedbathy_bathyonly_withcoloraerial
Then merge the models together
addedbathy_final

Easy as, eh? Let me know what you think!

Categories: OSGeo Planet

XYCarto: Building the Wellington Model with 1m DEM and DSM

OSGeo Planet - Thu, 2018-11-22 22:21

As interest in LiDAR derived elevation increases, so grows the interest in the capabilities. LiDAR derived elevation data has been great for my visualization game and in helping me communicate the story out about what LiDAR can do. It all starts with a picture to get the imagination going.

wellyvation

The Wellington model derived for this project is part of an ongoing project to help increase the exposure of the Wellington 1m DEM/DSM elevation data derived from LiDAR. Step one for me is getting a working model built in QGIS, capturing still images, and increasing interest in the data.

I’ve talked about the processing of the elevation data for Wellington visualizations in the past, here, so for this post I’m only focusing on the blending of the data sets in building the model. This project is a good model since it encompasses a number of subtle techniques to get the model to stand out. This post is one of a two-part series; the second post discusses the techniques used to derive and visualize the bathymetry for the surrounding harbor.

Let’s start with the base, Aerial Imagery.
wellyhabour_aerialonly

Blended with a hillshade
wellyhabour_aerial_withHS

DSM added for texture and context
wellyhabour_aerial_withDSMHS

Slope added to define some edges
wellyhabour_aerial_withDSMDEMSLOPEHS

Some darker shading added to the bathymetry to frame the elevation data
wellyhabour_aerial_withDSMDEMSLOPEHS_darkenframe

And finally some added bathymetry to lighten the edges at the shoreline enhancing the frame a bit more.
wellyhabour_aerial_withDSMDEMSLOPEHS_edgeframe

In the end there is some post-processing in Photoshop to lighten up the image. Honestly, this could have been done in QGIS, but I was being lazy. For the images produced, there was no need to retain the georeferencing, and when that is the case, I rely on Photoshop for color and light balancing.

The greatest difficulty in this project so far has been trying to create a universal model for the data set. I’m finding that as I visualize different regions using this model, I need to adjust the hillshading quite significantly to draw out different features. Take a look at the images here. It is the same model, but with noticeably different gradients used in the hillshades. The techniques used for the images in this post worked well for the urban region shown, but fall apart as you move further out into the more mountainous regions. Much of the blending is too harsh and turns the mountains into a black muddled mess. I am almost there, but like any project, it takes a good bit of subtle tweaking of the blending to get a universal image to work.

The entire base mapping work was completed in QGIS. The elevation data was processed using GDAL and the bathymetric interpolations were produced with SAGA GIS. There are no color palettes for this project. The aerial imagery does all the work in that department.

Base data can be found here:
DEM: https://data.linz.govt.nz/layer/53621-wellington-lidar-1m-dem-2013/
DSM: https://data.linz.govt.nz/layer/53592-wellington-lidar-1m-dsm-2013/
Aerial Imagery: https://data.linz.govt.nz/layer/51870-wellington-03m-rural-aerial-photos-2012-2013/

The next post covers the development of the bathymetry for the surrounding harbor. Thanks for having a look and let me know what you think.

Categories: OSGeo Planet