Ingeniería Mundial

Sólo para ingenieros (361): Drones, volcanoes and the "computerisation" of the Earth (2)

Dear colleagues and friends,


What connection could there be between drones, volcanoes and the computerisation of the Earth? An esteemed colleague and friend has sent us an article by Adam Fish, published on 14 December 2017 in "The Conversation", in which, as he sees it, he draws a connection between these three things, closing with an intriguing philosophical reflection. Let us see what it is about...

The eruption of the Agung volcano in Bali, Indonesia, has been devastating, especially for the 55,000 local people who have had to leave their homes and move into shelters. It has also wreaked havoc on flights into and out of the island, leaving people stranded while experts tried to work out what the volcano would do next.
But this has been a fascinating time for academics like me who research the use of drones in social justice, environmental activism and crisis preparedness. The use of drones in this context is only the latest example of the "computerisation of nature" and raises questions about how reality is increasingly being constructed by software. Amazon's drone deliveries are spreading and increasing in the United Kingdom, while in Indonesia people use drones to monitor orangutan populations, map the growth and expansion of oil palm plantations, and gather information that could help us predict when volcanoes such as Agung might erupt again with devastating impact.

The second mission involved flying a carbon dioxide and sulphur dioxide sensor through the volcano's plume. A rise in these gases can tell us whether an eruption is brewing. Indeed, a high level of carbon dioxide was detected, and that information prompted the government to raise the threat alert to its highest level. In the forthcoming third mission, we will use drones to check whether anyone remains in the exclusion zone, so that they can be found and rescued.

What is interesting to me as an anthropologist is how scientists and engineers use these technologies to better understand distant processes in the atmosphere and beneath the Earth. Flying a drone 3,000 metres up to the summit of an erupting volcano is a very difficult task. Several other groups have already tried and have lost some very expensive drones - sacrifices to what the Balinese Hindus regard as a sacred mountain.

More philosophically, I am interested in better understanding the implications of having sensor systems such as drones flying in the air, beneath the sea or inside volcanic craters - essentially everywhere. These tools can help us evacuate people before a crisis, but they also involve turning organic signals into computer code.
 
For a long time we have interpreted nature through technologies that extend our senses, especially sight. Microscopes, telescopes and binoculars have been superb resources for chemistry, astronomy and biology.
Internet of nature

But the sensorisation of the elements is something different. This has been called the computerisation of the Earth. We have heard a lot about the Internet of Things, but this is the internet of nature. This is the surveillance state turned biological. The current proliferation of drones is the latest step in wiring up everything on the planet - in this case, the air itself, to better understand the innards of a volcano. These flying sensors are expected to give volcanologists what the anthropologist Stephen Helmreich called abduction, a predictive and prophetic "argument from the future".

But the drones, sensors and software we use provide a particular and partial view of the world. Looking at the present from the future, what will be the impact of the increasing datafication of nature: better crop yields, better emergency preparedness, the monitoring of endangered species? Or will this quantification of the elements result in nature becoming a vassal to the logic of the computer? There is something not fully understood - or worse, unsettlingly incomprehensible - about how flying robots and autonomous cars equipped with remote sensing systems filter the world through Big Data algorithms capable of generating and responding to their own artificial intelligence. These non-human entities react to the world not as ecological, social or geological processes, but as functions and feature sets in databases. I worry about what this software view of nature will exclude and, as these systems remake the world in their database image, what the implications of those exclusions will be for planetary sustainability and human autonomy.
In this future world, there may be little difference between engineering directed at nature and the engineering of nature.

Source:
https://theconversation.com/drones-volcanoes-and-the-computerisation-of-the-earth-88674

__________
I am grateful for the contributions and opinions sent in.
No. of engineers on the distribution list: 1,155
Mailing no.: 361
Comments on these mailings are welcome.
This email contains no accented characters.
I send you all a warm embrace.
Arnoldo
____________
Comments:
____________
Sólo para Ingenieros (360): A new method for analysing the characteristics of maize kernels...
____________

There were no comments
____________
Other links:
Link to the portal of the Academia Panamericana de Ingeniería - Ciencia y Tecnología - 'Sólo para ingenieros': http://www.academiapanamericanaingenieria.org/

The column 'Ingenieria para todos' in the newspaper La Union de Morelos published, on Monday 11 December 2017: 'Wizards of water management: the Dutch experience with flooding is a major export business...'.

Effective Use of Geospatial Big Data

Server Solutions Hold the Key


The heart of any geospatial analysis system, regardless of its location or configuration, is increasingly the server. Whether the system sits in the cloud, in a secure data centre or on a single machine in an office, the challenge is much the same: coping with the ever-increasing quantity and variety of data the world now produces at an unprecedented rate. For mission-critical systems, purpose-built software is required, tested in the most demanding environments. Try to do it cheaper and you only end up wasting money.

Both commercial and government organisations recognise the enormous fiscal, operational and social benefits of using their geospatial data for analysis. However, because the volume, variety and velocity of the data are continually expanding, this creates growing problems for those tasked with storing, analysing and serving the information within an organisation. In addition, companies face challenges from the many new sensor platforms that are emerging, many of which did not exist a few years ago and which must be incorporated into future GIS applications.

‘Maps gone digital’ Mindset Disables

It is therefore vital that, when an organisation is considering a new system, it can deal with the ever-rising volume and variety of data, whether that data comes from georeferenced social media posts, high-resolution satellite imagery or smart energy meters. Legacy technology based on the 'maps gone digital' mindset cannot provide the visual quality, speed and accuracy necessary, nor run on platforms as diverse as Windows, Linux, Amazon AWS and Docker containers, or even be deployable from a USB stick if needed.

Organisations today need industry-standard system foundations for truly interactive solutions capable of analysing and visualising video, photographic, unstructured text and many forms of legacy data in real time and in a secure environment. Equally, systems should ideally be based on proven technology, extensively tested within the demanding mission-critical defence and aerospace sectors - the most extreme and demanding of all software operational environments. In essence, to deliver performance and accuracy without compromise for the analysis of geospatial Big Data and the location information that flows from the Internet of Things, purpose-built software is required.

Analysing social media by keyword (what) to map where and when the communication happened.

Even for those who only have experience of legacy systems, a single, unified, secure and future-proof server solution for data publication workflows and geospatial data management is what is most in demand. Systems of this kind let users manage their data intelligently, store and process a multitude of data formats and feed data into numerous applications with varying levels of security. Features such as powerful automatic cataloguing and quick, easy data publishing are also in demand, allowing users to design, portray, process and set up advanced 3D maps in a few clicks.

These are requirements and demands that we have seen at Luciad over the past few years, both through our work with some of the most demanding Big Data users such as NATO and EUROCONTROL and through our work with commercial organisations such as Oracle and Engie Ineo. So, what does a geospatial server solution capable of dealing with these challenges and satisfying these demands actually require? 

Spatial Server Solutions Requirements

First, they should be able to connect directly to a multitude of geographic data formats and stores such as IBM DB2, OGC GeoPackage, Oracle Spatial, SAP HANA and Microsoft SQL Server. This is essential if they are to cope with the explosion of formats that the rise of Big Data has prompted. It is equally essential that these server systems move away from the Extract-Transform-Load (ETL) paradigm and avoid converting the data into a fixed, high-cost proprietary format before analysis. Retaining the original format is recognised as the only approach that ensures both high speed and accuracy of processing when dealing with the growing, dynamic datasets that are now the norm.
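As a minimal illustration of working on data in its original format, the sketch below reads an OGC GeoPackage layer directly with the open-source geopandas library rather than converting it first; the file and layer names are hypothetical and this is not specific to any vendor's server product.

    import geopandas as gpd

    # Read a vector layer straight from an OGC GeoPackage (hypothetical file
    # and layer names) without converting it to an intermediate format first.
    buildings = gpd.read_file("city.gpkg", layer="buildings")

    # The same call also reads Shapefiles, and spatial database tables can be
    # queried with geopandas.read_postgis() when the data lives in a DBMS.
    print(buildings.crs, len(buildings))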

A view of the data management screen in LuciadFusion.

Second, in situations where a user has a large quantity of high-volume, high-quality geospatial data that needs to be published to an OGC standard, this must be achievable in a few clicks, avoiding complex, risky and time-consuming pre-processing of the data or custom software code. The same ease of use is required with other common formats such as ECDIS maritime data, Shape, KML and GeoTIFF, among many others. It is vital that this data can be accessed and represented in any coordinate reference system (geodetic, geocentric, topocentric, grid) and in any projection, while performing advanced geodetic calculations, transformations and ortho-rectification. This is especially crucial for datasets such as weather and satellite information, which include detailed temporal references and high-resolution video files that need to be visualised in 3D together with ground elevation data and moving objects.
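To make the coordinate-handling requirement concrete, here is a small sketch using the open-source pyproj library (not any particular server product); the EPSG codes and coordinates are illustrative only.

    from pyproj import Transformer, Geod

    # Re-project a WGS84 longitude/latitude pair into UTM zone 33N (EPSG codes
    # chosen purely for illustration); always_xy keeps lon/lat axis order.
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)
    easting, northing = to_utm.transform(15.0, 48.2)

    # A geodetic calculation on the ellipsoid: forward/back azimuths and the
    # distance in metres between two points.
    geod = Geod(ellps="WGS84")
    az12, az21, dist_m = geod.inv(15.0, 48.2, 16.4, 48.2)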

Help

The demands of users, however, do go beyond the server solution itself. At the heart of any decision regarding a major GIS IT or technology purchase should be an understanding of the support, training and help that will be required, plus a firm written commitment from the supplier regarding backwards compatibility. The user should also be aware of and involved with the development roadmap; a roadmap that should be driven by the needs and wishes of both the end user and the developer community.

A view of Los Angeles, with thematically styled buildings, using data published in LuciadFusion.

This is a recognised and concerning weakness of Open Source software, much of which offers near non-existent help and training. What training there is may come from individuals with no relevant qualifications or skills, who are often all located in a single time zone, which delays responses. Where lives matter, such as in mission-critical environments, Open Source is a risk. It is vital that advanced in-person and online training is available from people with an intimate working knowledge of the software and a relationship with the original coding team, backed up by detailed manuals and code examples. Training and support should come only from subject-matter experts who understand the time-critical commercial challenges of business. They should also be aware of the planned development paths for the software and the possibilities for custom code if needed for one-off projects with unique requirements.

Technology Advances

Giving users and developers the tools to extend the solution they have is also essential. As mentioned, new data formats may be introduced and requirements may change as technology advances. This calls for a user guide that delivers clear explanations and descriptions of best practices, along with API references that describe all interfaces and classes in detail, so that a new user can seamlessly add new data formats and sources as a project demands. As an example, building a Common Operating Picture will require combining imagery, military symbology, NVG files, radar feeds and ever-changing types of other data in one system, in near real time and with minimal delay.
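As a generic illustration (not the API of any particular product), an extension point for new data formats often boils down to a small decoder interface plus a registry; the NVG decoder below is a hypothetical stub.

    from abc import ABC, abstractmethod

    DECODERS = {}

    class Decoder(ABC):
        """Minimal extension point: each new format ships one Decoder subclass."""
        @abstractmethod
        def can_decode(self, source: str) -> bool: ...
        @abstractmethod
        def decode(self, source: str): ...

    def register(name):
        def wrap(cls):
            DECODERS[name] = cls()
            return cls
        return wrap

    @register("nvg")
    class NvgDecoder(Decoder):
        def can_decode(self, source):
            return source.lower().endswith(".nvg")
        def decode(self, source):
            # A real decoder would parse the file into the application's
            # common feature model here; this stub only echoes the source.
            return {"source": source, "features": []}

    def load(source):
        # Dispatch to the first registered decoder that accepts the source.
        for decoder in DECODERS.values():
            if decoder.can_decode(source):
                return decoder.decode(source)
        raise ValueError(f"no decoder registered for {source}")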

GIS can help build the future of a community, assist in the security of a country, form the basis of a mission-planning system and open new avenues of revenue and profit for a company. However, this can only be realised if the right system is specified and purchased. Too many organisations have wasted time, money and resources by attempting to save money or by cutting back on the initial requirements assessment. Research shows that systems proven in the crucible of deployed operations have the robustness required to deliver in other markets.

One of Luciad's partners is Sc2 Corp. Conversations on social media reveal valuable information for decision-makers, and leaders now have a tool to capture and analyse posts and tweets - and to keep what they find private. Sc2 Corp created the Human Terrain Analysis System, an application for analysing social media communication that uses Luciad technology to map where conversations occur and plot when they happened. The system also analyses what people are saying. All of this takes place in an appliance purchased by the user, not in the cloud. To monitor social media, Sc2 Corp needed to analyse data in 40 languages and examine the linguistics of the posts and tweets. It also wanted to map and plot the times of conversations, and partnered with IBM and Luciad to design the system. Users can choose to see mapped data in two or three dimensions. A concert promoter, for example, can see where people are talking about particular musicians, and retailers can gauge customer interest in particular products in their neighbourhood. The application can analyse in half a day the amount of data that would take 20 people a full day, and it is therefore becoming popular among governments, insurance companies, investment agencies and marketing groups.

www.gim-international.com - Friday, 10 November 2017

Overcoming the Bottlenecks of Today’s Dense Point Clouds

GIM International interviews Murat Arikan, NUBIGON

 

NUBIGON is a start-up company with offices in Turkey and Austria that has developed powerful reality-capture software. The company's solution visualises Lidar and photogrammetric point clouds in real time and in full HD, while retaining the accuracy that many professionals working with point clouds require. 'GIM International' decided to interview Murat Arikan, the company's ambitious founder and lead software developer, to find out more.

NUBIGON specialises in software to visualise 3D point clouds acquired by Lidar or photogrammetry. What are the main bottlenecks in the visualisation of today’s dense point clouds?

The most prominent bottleneck in the visualisation of today's dense point clouds is data size. Today, just one scan position from a decent 3D laser scanner is almost 1GB. In contemporary projects, scanned environments are generally huge (airport terminals, factories, etc.) and capturing these spaces properly can require tens or even hundreds of scan positions, which almost always results in 'big data'. Most of the 3D point cloud software currently on the market cannot handle that amount of data. The second most important bottleneck is the quality of the visualisation, which is directly connected to the data-size issue. In the era of high-quality rendering in games and movies, the visualisation quality of point clouds is not sufficient. Last but not least, another bottleneck is the interaction with 3D point clouds. Most of the time, users want to work in CAD software environments, but point cloud software generally offers limited integration with them. Although there are plug-ins for CAD software for easier integration, users still face a clash of different file types and hard-to-manage workflows. At NUBIGON, we overcome these bottlenecks with state-of-the-art algorithms and tools.
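To make the data-size problem concrete, here is a minimal numpy sketch of one common baseline technique (not NUBIGON's own algorithm): thinning a point cloud by keeping one point per voxel before interactive display.

    import numpy as np

    def voxel_downsample(points, voxel=0.05):
        """Thin an (N, 3) point cloud by keeping one point per voxel, a common
        first step to make very large clouds manageable for visualisation."""
        keys = np.floor(points / voxel).astype(np.int64)
        # Index of one representative point per occupied voxel.
        _, first_idx = np.unique(keys, axis=0, return_index=True)
        return points[first_idx]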

In which application domains is your software mainly used at present, and what further potential uses does it have?

At present, our focus is on archaeology and architecture. Our motto is “NUBIGON is the software that makes life easier for architects”. Most current methods for architects are outdated (tape measures, rangefinders, orthophotos and so on) and offer room for improvement. Our users take measurements, create floor plans and make architectural 3D CAD drawings directly on the point cloud and export the results to the CAD software of their choice. Hence, they don’t waste time converting 3D point clouds to other formats (e.g. mesh, orthophoto). But this doesn’t mean that NUBIGON is not capable of working with meshes or cannot export orthophotos. We offer our customers a new and streamlined workflow to capture reality. NUBIGON’s tools are made by designers, for designers. At NUBIGON, we always put the user experience first.

The density of point clouds is steadily increasing. Meanwhile, researchers are working hard on creating smart point clouds in which semantic information is attached to each and every point. How do you plan to ensure your software keeps pace with these rapid developments?

Actually, we're very happy that researchers are constantly enriching point clouds. Our first goal was to turn point clouds into a usable and visually appealing medium; all our research efforts were focused on that for years. Now that we've achieved that goal, we have launched two new R&D projects this year. One of them is about searching for shapes in point clouds so that walls, doors and roofs can be recognised automatically. Our other R&D project is almost complete; it is about streaming NUBIGON to all devices (including smartphones, tablets and Apple computers). In terms of the application for mobile devices, we're not talking about a simple mobile app here. Right now we are in the prototype stage and NUBIGON is capable of streaming its full functionality to any device. Besides that, we're enabling users to work concurrently on the same scene.
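Shape search in point clouds is often built on model fitting such as RANSAC; the sketch below is a generic textbook approach (not NUBIGON's method) that finds the dominant plane, the kind of primitive a wall or roof face produces.

    import numpy as np

    def ransac_plane(points, n_iter=500, threshold=0.05, seed=None):
        """Find the dominant plane in an (N, 3) point array with RANSAC and
        return a boolean mask of its inliers."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            # Sample three distinct points and derive the plane they span.
            idx = rng.choice(len(points), size=3, replace=False)
            p0, p1, p2 = points[idx]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue  # degenerate (collinear) sample, try again
            normal /= norm
            # Perpendicular distance of every point to the candidate plane.
            dist = np.abs((points - p0) @ normal)
            inliers = dist < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers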

What are the basic components of your business model?

We are following the lean start-up model and have built our business model entirely around that idea. We offer a fixed-price software-as-a-service (SaaS) solution. We created our business to solve our customers' problems, not to overwhelm them with complex licensing models. We think the idea of not wasting anyone's time shouldn't apply only to our investors or business partners but also to our customers; when we talk about "making life easier", we mean it!

NUBIGON is a spin-off from your PhD research activities at the Vienna University of Technology in Austria. Why did you decide to expand your knowledge by establishing a company?

My long-time friend Can Turgay helped me to realise that the research I conducted could be exploited commercially with the right approach. In my past career as a researcher, I wondered whether my research would produce usable results in the real world or only patents gathering dust on the shelf. Thanks to my friend’s support, the idea of applying my research to real-world problems influenced me and I decided to pursue this commercial idea with him in late 2014.

Like Racurs, a photogrammetric company based in Moscow, Russia, NUBIGON was established by four people. Do you think four could be a magic number for future success?

I don’t think that there is a magic number for future success. A team of hard-working, like-minded and skilled people is the real key to success. Our team consists of individuals who are experts in their own profession, and every member of the team brings something different to the table. It’s the combination of our skills that advances NUBIGON in business.

Your company is based in a 'technopark' in Istanbul. What are the advantages of developing a technology-driven business under the wings of a technopark?

Yes, one of our offices is located in a technopark in Istanbul. In general, technoparks are great places to meet like-minded entrepreneurs, and networking within a technopark especially helps you to solve lots of common problems. It has been really beneficial to be in touch with companies that have followed a similar path to ours, since some of the problems we faced during the development of NUBIGON are common among start-ups. Also, the technopark management's support for the commercialisation of our ideas proved a unique benefit.

What are your ambitions, and where do you want NUBIGON to be in five years’ time?

Right now we are operating in two countries, but our customers are all over the world. To reach them properly, we need to expand into different continents. This autumn, we'll conduct a feasibility study for expanding into the US market. Personally, I want to see my research help people in their daily lives. If we, as a company, can have a little impact on shaping a world in which industry professionals use point clouds to enhance their capabilities, I'll say "job well done." Five years from now, I hope NUBIGON will be a point-cloud software pioneer.

Do you have any advice for young researchers who want to set up a business of their own?

First and foremost, they need to understand that they cannot do everything on their own. No matter how well equipped they are, a successful business needs a great team. They need partners they can trust, and to use their talents and time effectively. Research, development, sales, management and marketing all demand very different perceptions and abilities. For a company to succeed, all of these must be managed well, and it is not possible for just one person to have so many talents. Another recommendation for young researchers is to set goals that can translate into a business. Only research that can be applied in practice creates good scenarios for start-ups. Their aim should always be to create a minimum viable product.

About Murat Arikan

Murat Arikan gained his MSc in mathematics from TU Wien in Vienna, Austria, in 2008. He is a doctoral candidate at the Institute of Computer Graphics and Algorithms of TU Wien. Throughout his doctoral research, he participated in numerous research projects and has co-authored five papers on the topic of reconstruction and visualisation. Thereafter, he formed a team that developed the point-cloud processing software NUBIGON, which provides commercial reality-capturing solutions in the fields of architecture, archaeology, engineering and construction. He is the lead software developer of NUBIGON. He resides in Austria.

www.gim-international.com - Friday, 10 November 2017

OBIA and Point Clouds

Airborne Lidar and Object-based Image Analysis


Object-based Image Analysis (OBIA) has been developed to improve the accuracy of conventional, pixel-based classification of multispectral images. Introduced around the year 2000 and implemented in various software packages such as eCognition, OBIA has been successfully applied for mapping land cover, forest and agricultural areas. Today, not only high-resolution multispectral images are available but increasingly also high-density 3D point clouds captured by airborne Lidar. Is OBIA also suited for the semi-automatic classification of Lidar point clouds? The author highlights promising prospects.

An airborne laser scanner consists of various sensors. The laser ranger emits pulses to measure the distance from the sensor to the point where each pulse hits the Earth's surface. To convert the distances to X,Y,Z coordinates, the position and attitude of the sensor have to be accurately measured using an inertial measurement unit (IMU) and a GNSS receiver on board the aircraft. Often imaging sensors such as RGB, hyperspectral, thermal or multispectral cameras are also on board. Helicopters are used as carriers for narrow-swath measurements at low altitudes; they can hover, thus providing point cloud densities of up to 200 points/m2 with high accuracy (Figure 1). Fixed-wing systems are suited for high altitudes, covering large areas and capturing point clouds with lower densities. Satellite-based systems are a special category and relatively rare.
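A highly simplified sketch of this direct georeferencing step is given below; real systems also apply lever-arm and boresight calibrations and use carefully defined rotation conventions, so treat the function and its frame definitions as illustrative assumptions only.

    import numpy as np

    def georeference(range_m, scan_angle_rad, sensor_xyz, roll, pitch, yaw):
        """Very simplified direct georeferencing of one lidar return: the pulse
        vector in the scanner frame is rotated into the mapping frame using the
        IMU attitude and added to the GNSS position of the sensor."""
        # Pulse direction in the scanner frame: nadir, tilted by the scan angle.
        v_sensor = np.array([np.sin(scan_angle_rad), 0.0,
                             -np.cos(scan_angle_rad)]) * range_m
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        # Elementary rotations about x (roll), y (pitch) and z (yaw).
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return np.asarray(sensor_xyz) + Rz @ Ry @ Rx @ v_sensor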

Figure 1, Platforms and related coverage of Airborne Laser Systems.

Benefits

Wide-area systems provide accurate DEMs for orthorectification and contour generation, suited for crop hazard analysis, hydrologic modelling and floodplain mapping. These systems also provide information on the heights of tree stands, biomass and excavation volumes, and support other natural resource management tasks. Furthermore, they enable mapping of transportation and utility corridors, and in urban areas the 3D models derived from the point clouds enable line-of-sight studies, viewshed analyses and much more. Helicopters and UAVs are well suited for capturing transmission lines to determine thermal rating and canopy height. The point clouds acquired from these platforms are also beneficial for monitoring railways, highways, levees and pipelines. In addition to capturing linear objects, these platforms are suited to collecting points over areas of limited extent. The main benefits that make ALS point clouds a very interesting source of spatial data are (Vosselman & Maas, 2010):

  • Very high speed of data collection for large areas with each data point having information on 3D (X,Y,Z) position, intensity of the return and echo width in case of full-waveform digitisation
  • High coverage, allowing features that were initially missed in the field to be identified at a later stage, while accurate spatial data can still be easily collected
  • The elevation is measured directly by the sensor and not from image matching applied to the reflectance values of images which are highly sensitive to the types of object, humidity and other atmospheric conditions
  • Multiple returns per pulse are used as an invaluable source of information in vegetated areas and thus in many forestry applications. Multiple returns can also provide insight into the vertical structure and complexity of forests.

Added to this, compared to images, ALS systems can see through the canopy, as the pulses penetrate small gaps in vegetation and other semi-transparent objects and can thus provide additional information on the physical properties of the object.

Figure 2, ALS point cloud of the historical centre of Biberach an der Riß, Germany.

OBIA

ALS collects a raw point cloud consisting of irregularly distributed 3D points. These points are geometric features but have no meaning per se, since a point cloud does not represent structures of separable, clearly delineated objects; it is a group of points fixed in an internal or real-world coordinate system. The human eye can see patterns in such representations (Figure 2), but computers need processing to assign classes and give meaning to groups of adjacent points. The classification of images involves assigning thematic classes to pixels. All pixels have the same size and shape, and neighbouring pixels carry no information about whether they belong to the same object. Object-based image analysis (OBIA) segments an image by grouping pixels based on similarities in spectral or other properties. The basic assumption is that a segment forms an object or a part of an object. However, 'over-segmentation' is sometimes required to classify complex objects such as a rooftop consisting of roof planes, chimneys and dormers.
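As a small, generic illustration of the segmentation step (using the open-source scikit-image library, not eCognition), the sketch below groups the pixels of an image into spectrally homogeneous segments and then computes per-segment statistics that rule-based classification could use; the input array is synthetic.

    import numpy as np
    from skimage.segmentation import slic
    from skimage.measure import regionprops

    # Synthetic (H, W, 3) image scaled to [0, 1]; in practice this would be an
    # orthophoto or a rasterised band stack.
    image = np.random.rand(200, 200, 3)

    # Group pixels into roughly 300 spectrally homogeneous segments
    # (object candidates).
    segments = slic(image, n_segments=300, compactness=10.0, start_label=1)

    # Per-segment statistics (here the mean of the first band) can then feed
    # rule-based or supervised classification of the objects.
    mean_per_segment = {r.label: r.mean_intensity
                        for r in regionprops(segments,
                                             intensity_image=image[..., 0])}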

Context is Key

When looking at Figure 2 one may recognise buildings. In his pioneering 1982 book 'Vision', David Marr challenged scientists: "What does it mean to see? The plain man's answer (and Aristotle's, too) would be, to know what is where by looking. In other words, vision is the process of discovering from images what is present in the world, and where it is." Similarly, OBIA aims to let computers 'see' beyond the plain pixels: what does this data represent in the real world? Context is the key. Advances in remote sensing technology, combined with higher spatial resolutions, allow for more 'intelligent' image analysis, including OBIA. According to Lang (2008, p.6), 'intelligence' in this context includes: (1) an advanced way of supervised delineation and categorisation of spatial units, (2) the way in which implicit knowledge or experience is integrated, and (3) the degree to which the results contribute to an increase of knowledge and a better understanding of complex scenes. So far, OBIA within the geosciences has been used to partition satellite images into meaningful image-objects and to assess their characteristics across spatial, spectral and temporal scales. Compared with pixels, which have no direct counterpart in the real world, these image-objects are more closely related to real-world objects. What we hope to achieve are semantically interpretable segments. Such segmented imagery can be further processed by attaching attribute values to these objects or object candidates.

Figure 3, Rasterised representations of ALS data, from left to right: raster cells containing heights; raster cells containing intensity values; initial objects after segmentation; modified and classified objects representing buildings.

OBIA on Point Clouds

Recently, researchers have started to apply OBIA to point clouds. To be suited for OBIA, point clouds usually have to be converted from a 3D representation into a 2D raster or a 2.5D representation, that is, a raster with one height value per grid cell. Such representations are suitable for further analysis (Figure 3). Segments that are homogeneous in height and/or intensity of the return are used as input for grouping and classification. ALS point clouds do not contain RGB values; the raw data, consisting of height, the number of returns per pulse and the intensity of the return, can be enriched with information from other sources. OBIA exploits size, shape, position and relationships to other segments, which improves the classification results. For example, segments with straight outlines indicate buildings or streets, while fuzzy, irregular outlines may indicate vegetation. OBIA therefore enables a hierarchical, multiple-spatial-scale approach, allowing the characteristic nested scales of man-made or natural features to be used.
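A minimal numpy sketch of that 3D-to-2.5D conversion is given below: a simple highest-point-per-cell surface grid. Production workflows would additionally interpolate empty cells and usually separate ground from non-ground returns first.

    import numpy as np

    def rasterise_to_2_5d(points, cell=1.0):
        """Convert an (N, 3) ALS point array into a 2.5D height raster by
        keeping the highest Z value per grid cell (a simple surface model)."""
        xy_min = points[:, :2].min(axis=0)
        cols, rows = (np.ceil((points[:, :2].max(axis=0) - xy_min) / cell)
                      .astype(int) + 1)
        grid = np.full((rows, cols), np.nan)
        ij = ((points[:, :2] - xy_min) / cell).astype(int)
        for (i, j), z in zip(ij, points[:, 2]):
            if np.isnan(grid[j, i]) or z > grid[j, i]:
                grid[j, i] = z
        return grid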

Figure 4, Flowchart of the developed approach.

Example

We developed an OBIA approach for automatically detecting and outlining buildings using the Cognition Network Language (CNL), a modular programming language within eCognition. The approach, of which an overall flow diagram is shown in Figure 4, was tested on an ALS point cloud partly covering the historical city centre of Biberach an der Riß, Germany, provided by Trimble and collected in March 2012 using a Trimble Harrier 68i system. The point cloud covers an area of 2.5km2, consists of multiple returns with intensity values and has an average point density of 4.8 points/m2. The old town is characterised by older, tightly packed houses, some with shared walls. The accurate result of the OBIA building extraction approach is shown in Figure 5.

Concluding Remarks

OBIA not only provides good results for classifying images but is also highly suited to automated building extraction from ALS point clouds in the form of 2D polygons representing roof outlines.

Figure 5, Detected buildings (blue) superimposed on the ALS intensity image.

www.gim-international.com - Friday, 10 November 2017

Photogrammetric UAV Software

What to Consider Before the Purchase

The incredible diffusion of unmanned aerial vehicles (UAVs) has pushed many companies and research groups to develop dedicated software for processing the data acquired by these devices. The number and completeness of these software solutions have constantly increased with the aim of satisfying a growing and heterogeneous market. Depending on the scope of the UAV acquisitions, the experience and technical skills of the operator and the available budget, several affordable solutions are already available on the market. The all-encompassing software package probably does not exist, but some features and options should be considered when approaching these tools in order to find the optimal solution for one's needs.

The software should be able to import and process images and videos acquired with different sensors. This includes not only the RGB cameras that can be carried in the payload, but also terrestrial and airborne sensors, because UAV data is increasingly often integrated with other acquisitions in a single image block. When videos are acquired, the software should be able to extract the best frames (also in terms of image quality) to allow the photogrammetric process. Several multispectral, hyperspectral and thermal cameras are nowadays available on the market. Their adoption in some applications depends on efficient processing in the software, even though their resolution and geometry are suboptimal for purely photogrammetric processing. Their easy integration with RGB images can also be very valuable in several applications.
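One common heuristic for picking the best video frames is to rank them by sharpness; the sketch below uses the open-source OpenCV library with the variance of the Laplacian as a focus measure. The file name, sampling step and top-k count are arbitrary assumptions, and this is not any specific vendor's method.

    import cv2

    def best_frames(video_path, every_n=15, top_k=20):
        """Score every n-th frame by sharpness (variance of the Laplacian)
        and return the indices of the top-k sharpest candidates."""
        cap = cv2.VideoCapture(video_path)
        scores, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_n == 0:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                scores.append((cv2.Laplacian(gray, cv2.CV_64F).var(), idx))
            idx += 1
        cap.release()
        return [i for _, i in sorted(scores, reverse=True)[:top_k]]

    # Example (hypothetical file name):
    # frames = best_frames("flight_001.mp4")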

Image processing

The image processing is normally performed with "modern", automated photogrammetry and computer vision algorithms. Depending on the user's expertise, turn-key solutions (with reduced options) may be preferred over more technical and rigorous approaches. Default parameters give quite good solutions in most practical cases. However, the possibility to modify the parameters and their weights can be preferable for improving the results, especially in the most challenging cases. Tunable parameters can be beneficial both in the image orientation and in the dense point cloud generation phases. Even if automation is a priority, small tools for adding tie-points in critical image orientations or removing mismatches in the generated dense point clouds can also be important for effective processing. The same holds for true-orthoimages and meshes generated from these datasets: simple manual or semi-automated editing tools to "polish" the results allow faster delivery of these products without the need for external software such as image and 3D model editors. Last but not least, clear attention should be given to the results report at each step of the processing: a solution with a thorough statistical description of the achieved results is mandatory when the accuracy of the 3D reconstruction is a requirement of the work!
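As an example of the kind of figure such a report should contain, the sketch below computes the per-axis and 3D root-mean-square error of reconstructed check points against their surveyed coordinates; this is a generic accuracy metric, not tied to any particular package.

    import numpy as np

    def checkpoint_rmse(estimated, surveyed):
        """RMSE per axis and in 3D between reconstructed and surveyed
        check-point coordinates (each an (N, 3) array-like)."""
        d = np.asarray(estimated, dtype=float) - np.asarray(surveyed, dtype=float)
        rmse_xyz = np.sqrt((d ** 2).mean(axis=0))   # per-axis RMSE
        rmse_3d = np.sqrt((d ** 2).sum(axis=1).mean())  # overall 3D RMSE
        return rmse_xyz, rmse_3d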

UAV applications

UAV applications are increasing in complexity and the delivery of products beyond the classical photogrammetric workflow is becoming more common. Automated DTM extraction, scene classification exploiting both images and point clouds as well as detection and tracking of features of interest are necessary in many applications. In this regard, software capable of automatically or semi-automatically generating this kind of information can be very helpful in terms of productivity.

The battery endurance and productivity of UAVs have increased considerably, with the consequence that larger amounts of data are collected and heavier computations are needed to process these images. Fortunately, most photogrammetric algorithms can be parallelised, so software that exploits multi-core and GPU computing can mitigate this problem. Recent software able to run the processing on clusters or in the cloud is another efficient way to reduce the computational time and increase productivity.
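Many photogrammetric steps are independent per image, which is why they parallelise well; the sketch below distributes a per-image task over all CPU cores with Python's standard multiprocessing module. The task body and file names are placeholders.

    from multiprocessing import Pool

    def extract_features(image_path):
        """Placeholder for a per-image task (feature extraction,
        undistortion, quality checks, ...)."""
        return image_path  # a real task would return keypoints, stats, etc.

    if __name__ == "__main__":
        images = ["img_0001.jpg", "img_0002.jpg"]  # hypothetical file names
        # Independent per-image work maps naturally onto a process pool that
        # uses all available CPU cores.
        with Pool() as pool:
            results = pool.map(extract_features, images)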

The UAV world is changing rapidly, and so is the software dedicated to these applications. The points mentioned above are some (but not all) of the aspects that should be considered when purchasing software for UAV data processing. Finding the most suitable solution depends on the ability to define one's priorities and to assess how well they are implemented in the available software.

www.gim-international.com - Friday, 10 November 2017
