Expressing barycentric coordinates symbolically in GEM#230
Conversation
```diff
 if isinstance(expr, Node):
     return expr
-elif isinstance(expr, Number):
+elif isinstance(expr, Number) or isinstance(expr, numpy.ndarray):
```
The type of the `numpy.ndarray` must be numeric as well. A similar change will be merged quite soon in this PR: https://github.com/firedrakeproject/fiat/pull/172/changes#diff-060675ae9eab1d2f0d8121776a1c79b2f16a7a3c41c7a31476333f59e351cf60R1352-R1357
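A sketch of the kind of check being suggested (the helper name and exact logic are my own, not the actual FIAT/gem code):

```python
import numbers
import numpy

def is_gem_convertible(expr):
    # Hypothetical helper illustrating the review suggestion: accept plain
    # Numbers, and accept numpy arrays only when their dtype is numeric,
    # so that e.g. object arrays holding symbolic nodes are rejected.
    if isinstance(expr, numbers.Number):
        return True
    if isinstance(expr, numpy.ndarray):
        return numpy.issubdtype(expr.dtype, numpy.number)
    return False
```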
```python
def is_macrocell(self):
    return any(c.is_macrocell() for c in self.cells)
```
```python
return ComponentTensor(IndexSum(expr, (k, )), (*i, *j))
```
```python
if self.shape[-1] == 1:
    # Avoid generation of contraction loops over singleton dimensions (extent-1 indices)
```
This case is already simplified in `IndexSum.__new__`: https://github.com/firedrakeproject/fiat/blob/main/gem/gem.py#L883-L895
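A toy model of that simplification (an assumption about its behaviour, not the actual `IndexSum.__new__` source):

```python
class Index:
    # Minimal stand-in for a gem free index with a known extent.
    def __init__(self, extent):
        self.extent = extent

def index_sum(summand, index):
    # Toy model of the simplification: a sum over an extent-1 index is
    # just the summand at position 0, so no contraction loop is needed.
    if index.extent == 1:
        return summand(0)
    return sum(summand(i) for i in range(index.extent))
```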
I think you ran into issues that arise from the use of `ComponentTensor` within `Indexed.__new__` when you tried to compile things on your own. We typically need to call `gem.optimise.remove_componenttensors` before compiling.
I think you are right. I ended up calling `impero_utils.preprocess_gem` before `impero_utils.compile_gem`, and it does compile the expression without any errors. This indeed makes the separate case I added redundant.
```python
key = (key,)

# Expand ellipsis -> fill remaining dimensions with slice(None)
if Ellipsis in key:
```
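The expansion being done here can be sketched as a standalone helper (hypothetical name, mirroring numpy's indexing behaviour):

```python
def expand_ellipsis(key, ndim):
    # Replace a single Ellipsis in an indexing key with enough slice(None)
    # entries so that the key addresses all ndim dimensions.
    if Ellipsis not in key:
        return key
    pos = key.index(Ellipsis)
    fill = ndim - (len(key) - 1)  # dimensions not covered by explicit entries
    return key[:pos] + (slice(None),) * fill + key[pos + 1:]
```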
Should we move the Ellipsis support to `gem.view`?
```python
# Compute barycentric coordinates on each axis
for factor, s in zip(flat_factors, point_slices):
```
We have very similar patterns to this one, except that we try to avoid unflattening into simplices and instead work with the unflattened factors in the tensor product; flattening happens through recursion. See for instance `TensorProductCell.distance_to_point_l1()` (which is very close to `compute_barycentric_coordinates()` but returns a single number) and `TensorProductCell.get_entity_transform()` (which returns the vector of coordinates mapped to a sub-entity).
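The recursive tensor-product pattern can be sketched as follows (an assumption about its shape, not the actual FIAT methods; the helper names are mine):

```python
def distance_to_point_l1(factor_distances, point_blocks):
    # Each factor measures its own block of coordinates, and the
    # per-factor L1 distances simply add up, so flattening happens
    # implicitly through the recursion.
    return sum(d(p) for d, p in zip(factor_distances, point_blocks))

def interval_l1(x):
    # L1 distance from a point to the reference interval [0, 1].
    return max(0.0, -x, x - 1.0)
```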
My reasoning for unflattening was so that I am able to call `SimplicialComplex.compute_barycentric_coordinates` and stack the barycentric coordinates in a consistent axis order.
To me, it seems like the recursive pattern in both `distance_to_point_l1()` and `get_entity_transform()` was adopted because the underlying operations are composable. In `compute_axis_barycentric_coordinates`, I am explicitly computing and storing the barycentric coordinates in axis blocks (such that the first set of barycentrics corresponds to the first axis, etc., in the order of axes given by `self.cells`). This allows me to consistently apply the facet permutation in `Hypercube.compute_barycentric_coordinates` when reordering the coordinates. Would this order still be preserved if I rely on recursive calls?
I think you can compute the barycentric coordinates per unflattened factor, stacking them as a `numpy.array`, and separately apply the facet permutation (as you are already doing). Then, if you want to, you can cast the `numpy.array` as a `ListTensor` (note that `as_gem` will do that once GEM tabulations gets merged).
Stacking the coordinates with numpy will effectively produce the unflattening that you need in order to apply the permutation.
Yes, the permutation happens inside `Hypercube` because that's where we define the `unflattening_map`, which we currently don't do in `TensorProductCell`. This `unflattening_map` is what allows me to deduce the correspondence between each barycentric coordinate (in each axis block) and each facet of the cell. In general tensor product cells, this correspondence may not be as straightforward when done this way. Perhaps there's a better solution if we are to generalise the facet permutation to general TP cells.
I'll rewrite this bit using implicit recursion through unflattened factors and stack the results in a numpy array as you suggest. I'll see what it gives.
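The per-factor stacking being discussed could look roughly like this (hypothetical helper, not FIAT code; the per-factor barycentric routines are passed in as callables for illustration):

```python
import numpy

def tp_barycentric(factor_barys, axis_points):
    # Compute barycentric coordinates per unflattened factor and stack the
    # per-axis blocks with numpy, so the block order follows the order of
    # the factors and a facet permutation can be applied afterwards.
    blocks = [bary(pts) for bary, pts in zip(factor_barys, axis_points)]
    return numpy.concatenate(blocks, axis=-1)
```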
```python
bary_axis = factor.compute_barycentric_coordinates(axis_points, entity, rescale)
bary_coord_per_axis.append(bary_axis)

return bary_coord_per_axis
```
To be consistent with `Simplex.compute_barycentric_coordinates`, we should be returning a `numpy.ndarray`.
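For reference, an eager simplex-style computation that returns a `numpy.ndarray` might look like this (a sketch under my own assumptions, not FIAT's actual routine):

```python
import numpy

def simplex_barycentric(vertices, points):
    # Barycentric coordinates on a simplex, as a numpy.ndarray of shape
    # (npts, nverts): solve V.T @ lam = p together with the affine
    # constraint sum(lam) = 1 for each point.
    V = numpy.asarray(vertices, dtype=float)        # (nverts, dim)
    P = numpy.asarray(points, dtype=float)          # (npts, dim)
    A = numpy.vstack([V.T, numpy.ones(len(V))])     # (dim + 1, nverts)
    B = numpy.vstack([P.T, numpy.ones(len(P))])     # (dim + 1, npts)
    return numpy.linalg.solve(A, B).T               # (npts, nverts)
```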
```
NOTE:
Should we support computing barycentric coordinates on cell entities?
Should we support rescaling the barycentric coordinates?
```
The rescale option will scale the i-th barycentric coordinate by the distance from vertex i to face i. This is useful to compute more meaningful distances when more than one simplex is involved, e.g. when you have a subdivided triangle (a macroelement) and you are trying to decide which subtriangle is closest to a point.
Adding support for this option should be a matter of propagating the argument into each factor of the `TensorProductCell`.
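A toy numeric illustration of the rescale option (my own example on the reference triangle, not FIAT code):

```python
import numpy

def rescaled_barycentrics(x, y):
    # On the reference triangle with vertices (0,0), (1,0), (0,1), the
    # barycentric coordinates are (1-x-y, x, y). Scaling the i-th
    # coordinate by the distance from vertex i to the opposite face turns
    # it into the perpendicular distance from the point to that face.
    bary = numpy.array([1.0 - x - y, x, y])
    # distance from each vertex to its opposite face
    dists = numpy.array([1.0 / numpy.sqrt(2.0), 1.0, 1.0])
    return bary * dists
```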
I think we should discuss a protocol to support symbolic computations with FIAT. FIAT is written in numpy, so one approach is to assume that all inputs and outputs are (or can be cast as) numpy arrays. This is the approach taken in #172. In particular, let's take as an example the main part of that PR (lines 148 to 151 in 5c231cc): there, FInAT unpacks a tensor-valued GEM expression (a list of coordinates) into a numpy array of scalar expressions. However, this means that we will have expanded expressions that involve the individual coordinates, which prevents GEM contractions along the coordinate axis.
To support GEM tensor expressions in barycentric computations, the main idea was to substitute numpy operations with equivalent tensor operations that have implementations in both numpy and GEM. As a result, the barycentric coordinate computations support inputs that are either numpy arrays or GEM Nodes. Where array-style manipulations were required, equivalent functionality was introduced directly in GEM; an example of that is array slicing. The only explicit GEM dependency added to FIAT appears in `fiat/FIAT/reference_element.py` (lines 1637 to 1646 in ddf6eb4). This raises a design question: is it acceptable to keep this GEM dependency in FIAT (since barycentric computations are inherently tied to reference cell geometry, which lives in FIAT), or should it instead be routed through FInAT to preserve the traditional separation between FIAT and GEM? Introducing an indirection layer in FInAT to dispatch back into FIAT for geometry-related symbolic operations would be a far more complex approach, at least in the context of the barycentric coordinate computation.
This PR introduces new features that enable the barycentric coordinates of points, given in cell-local reference coordinates, to be expressed symbolically in GEM.
Previously, FIAT's `compute_barycentric_coordinates` routines assumed that input points are numpy arrays and performed eager numpy operations such as `dot` and array slicing. Passing a `gem.Node` object resulted in numpy doing object manipulations rather than constructing meaningful GEM expressions.

Broadly, two sets of changes were made:
In `FIAT.reference_element.py`: new methods were introduced to enable the computation of barycentric coordinates on tensor product cells, which previously did not exist. Additionally, all operations within those methods support both the case where the input array of points is a numpy array and the case where it is a GEM tensor expression. Some operations, such as slicing in the case of tensor product cells, required introducing `gem` operations, which makes FIAT depend on GEM. These could potentially be removed if the equivalent numpy operations can be re-implemented in GEM directly.

In `gem.gem.py`: minor changes were made to `__matmul__`. First, the method now supports multiplication between a GEM tensor and a numpy array. Second, due to some compilation issues raised by loopy, a special case was introduced to handle tensor contractions over singleton dimensions. This essentially replaces the index summation with direct indexing, which eliminates the need to build trivial contraction loops (iterating over indices of extent 1).
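The singleton-dimension special case can be illustrated in plain numpy (a sketch under my own assumptions, not gem's actual `__matmul__`):

```python
import numpy

def contract_singleton(a, b):
    # When the contracted dimension has extent 1, the index sum
    # degenerates to a single product, so direct indexing replaces the
    # reduction loop: (a @ b)[i, j, k] == a[i, j, 0] * b[0, k].
    assert a.shape[-1] == 1 and b.shape[0] == 1
    return numpy.multiply.outer(a[..., 0], b[0])
```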